Trustible

Technology, Information and Internet

Washington, DC · 1,551 followers

Trustible is the best way to document, manage, and report on your AI Governance strategy to build trust and reduce risk.

About us

At Trustible, our mission is to help organizations manage and mitigate AI risk, build trust, and accelerate Responsible AI development. Trust in AI systems, and in the organizations deploying them, is the single most important factor that will drive successful AI adoption. The way AI is developed and deployed is going to go through a radical transformation: away from the traditional software development lifecycle of moving fast and breaking things, and toward the processes used for other high-risk products, such as medical devices, where potential harms and risks are closely tracked and mitigated. Trustible can guide organizations through that change and equip them with the workflows, documentation tools, and reporting capabilities necessary for that more regulated future. Building trust and deploying AI ethically goes beyond good governance and regulatory compliance. Trustible helps you say it, do it, and prove it.

Website
https://www.trustible.ai/
Industry
Technology, Information and Internet
Company size
2-10 employees
Headquarters
Washington, DC
Type
Privately Held

Locations

Employees at Trustible

Updates

We're thrilled to share our latest 2-minute video that articulates the vision for how Trustible supports customers operationalizing their AI governance strategy. Much like seatbelts and brakes enabled cars to go faster, AI governance can accelerate responsible AI development & deployment. The Trustible Responsible AI Governance platform helps customers:

🤝 Demonstrate trustworthy practices and governance controls to your customers or regulators
🛡️ Manage and measure the benefits and risks of your AI systems
⚖️ Ensure compliance with AI regulations and international standards
💡 Leverage best-in-class recommendations, insights, and best practices on AI governance to manage both the speed of change of the technology and the regulatory environment

Whether you're looking for an out-of-the-box solution or a customizable platform to meet your unique needs, Trustible is your partner in navigating the complexities of AI with confidence.

YouTube link 👉 https://lnkd.in/dd4Q77g8

#AI #ResponsibleAI #AIGovernance

On July 11, 2024, the New York State Department of Financial Services (NY DFS) released its final circular letter on the use of external consumer data and information sources (ECDIS), AI systems, and other predictive models in underwriting and pricing insurance policies and annuity contracts. A circular letter is not a regulation per se, but rather a formalized interpretation of existing laws and regulations by the NY DFS; even so, it shows continued efforts by state governments and insurance commissions to regulate the use of AI. In our latest blog post, we summarize everything you need to know about the circular letter, as well as how its requirements overlap (or don't) with other standards such as Colorado Regulation 10-1-1 or the National Association of Insurance Commissioners (NAIC) Final Model Bulletin. Link in comments. #aiinsurance #insurance

    Who's responsible if an AI-powered feature leads to financial damages? Our Co-Founder & CTO Andrew Gamino-Cheong was recently quoted in this US News & World Report article highlighting some of the challenges in liability at the intersection of Section 230 and Generative AI output. "A GenAI search tool that only shares one answer has to have a higher liability standard than a platform that shares 10 options. This is especially true if the GenAI system can't easily point to the source of its information and answer," Gamino-Cheong said. He explained that "the AI system is essentially doing the selection of the single best answer, and that unexplainable selection is an editorial decision that typically hasn't been clearly covered by Section 230." Read his full comments here -> https://lnkd.in/egHUytst

    What Section 230 Means for Your Money

    money.usnews.com

Our latest Trustible Newsletter edition is out! In today's edition, we cover:

1️⃣ Overturning the Chevron Doctrine wasn't the only major Supreme Court ruling that could affect AI regulation. Two other cases make it easier to sue federal agencies for overstepping, and force jury trials for agency enforcement actions.
2️⃣ Looking at the risks of AI isn't enough; businesses are also increasingly focused on measuring the potential benefits and calculating the ROI of AI investments. (Trustible can help with this!)
3️⃣ There are 3 clear steps organizations can take today to prepare for EU AI Act compliance (which starts in Feb 2025): draft your AI policies, build an AI use case inventory, and implement AI literacy & compliance training.
4️⃣ Retrieval Augmented Generation (RAG) is a useful architectural pattern, but not a silver bullet, and can introduce additional security, privacy, and performance issues.
5️⃣ Trump's selection of J.D. Vance as his running mate, and recent endorsements by major tech leaders like Elon Musk and Marc Andreessen, mean the AI policy proposals in a second Trump administration could look drastically different from current proposals (the regulations may be heavily pro-innovation and actually do more to promote AI development and adoption).

Link in comments.

In the second of three installments on AI Policies, we do a deep dive into drafting an organizational AI use policy. A recent study from Microsoft/LinkedIn found that most employees are using AI during the workday, often without informing their supervisors, leading to potential risks like data privacy concerns and unmonitored AI harms. This widespread use of ‘shadow AI’ underscores the need for clear organizational guidelines. Our latest blog post explores the importance of implementing an AI use policy to provide actionable guidance on appropriate AI usage, improve transparency, and leverage the benefits of AI in the workplace. Read our post here: https://lnkd.in/e9CJVYnh

Trustible is thrilled to announce that it has been selected as a recipient of the prestigious Google for Startups Latino Founders Fund. This funding from Google is a testament to Trustible’s innovation, growth potential, and continued leadership in the field of AI governance.

Trustible's founding was driven by a commitment to ensure that AI serves as a force for equity and opportunity, especially for Latino and other underrepresented communities (read more about our founding story here: https://lnkd.in/er9-wfk5). Google’s support validates and strengthens our mission of enabling Trustworthy and Responsible AI.

The Google for Startups Latino Founders Fund is an initiative specifically designed to support tech startups led by Latino founders. Trustible will receive $150,000 in non-dilutive funding, personalized mentorship from Google experts and top local mentors, and a community network of like-minded entrepreneurs. Being selected for this program gives us access to a wealth of resources and guidance to help us navigate the challenges and opportunities that come with scaling a startup, alongside world-class Responsible AI resources. The Google for Startups Founders Fund has awarded over $50 million to date supporting Black- and Latino-led startups, which have gone on to raise more than $590 million in follow-on funding.

Trustible is committed to leveraging this opportunity to drive positive change in the AI landscape and to support our clients in their journey toward Responsible AI adoption. Trustible founders Gerald Kierce Iturrioz and Andrew Gamino-Cheong would like to extend their heartfelt thanks to Google for believing in our vision and supporting our mission.

Google for Startups press release: https://lnkd.in/g3FZbetR
Trustible press release: https://lnkd.in/gJFP4yym

AI Policies can generally be divided into three categories:

1) Comprehensive Organizational AI Policy – includes organizational principles, roles, and processes
2) AI Use Policy – outlines what kinds of tools and use cases are allowed and what precautions employees must take
3) Public-Facing AI Policy – outlines the organization's core ethical principles and its stance on key AI policy issues

In the first of a series of 3 AI Policy posts, we take a deep dive into drafting a comprehensive AI Policy. We outline 14 key considerations and tradeoffs you need to weigh to ensure your AI deployment aligns with your organization's principles, complies with regulatory standards, and mitigates potential risks. Read more here: https://lnkd.in/evrr2mBb

Trustible is thrilled to announce the addition of three new members to the Trustible Advisory Board: Larry Quinlan, Jason Hirsch, and Francisco Sanchez. Their deep expertise in AI, enterprise technology, regulatory strategy, and product counseling will guide Trustible customers and leadership on global challenges at the intersection of technology, law, and government policy. Read our press release here: https://lnkd.in/eAGFvNxk


Similar pages

Funding

Trustible: 2 total rounds

Last Round

Grant

US$ 150.0K

See more info on Crunchbase