Microsoft’s Post

Microsoft

When an AI model "hallucinates," it departs from the data it was given, changing details or adding information that isn't in its source material. While there are times this can be beneficial, Microsoft researchers have been putting large language models to the test to reduce instances where information is fabricated. Find out how we're creating solutions to measure, detect, and mitigate the phenomenon as part of our efforts to develop AI in a safe, trustworthy, and ethical way. https://msft.it/6049Y0m41


An excellent way to see the evolution happen so quickly.

Ramesh Jadhav

Manager, Group Human Resources. SAP HCM | SuccessFactors | HRBP | HR Strategy and Policies | Compensation and Benefits | PMGM | EC | CLMS | HR Digitization | Innovation and Transformation in HR | Compliance Management | Legal

1mo

Very informative

That’s fantastic to hear, Microsoft! Reducing instances of AI hallucination is crucial for maintaining the integrity and reliability of AI systems. By creating solutions to measure, detect, and mitigate fabricated information, you are setting a high standard for AI development. This approach not only enhances the accuracy and trustworthiness of large language models but also ensures their safe and ethical use in various applications. Kudos to your researchers for their dedication to advancing AI technology in a responsible manner. Looking forward to seeing the positive impact of these innovations!

MERKA PHOENIX

Software Engineer w/ 16+ years delivering Technology Transformation, innovations, & modular code for strategic long-term projects

1mo

Another good breadcrumb along the discussion path on memory is this one: https://en.wikipedia.org/wiki/Fuzzy-trace_theory

Saman Akbarian

BI Developer/Data Engineer at Sogeti

1mo

It's a feature, not a bug. Let it hallucinate; we all do it from time to time.

MERKA PHOENIX

Software Engineer w/ 16+ years delivering Technology Transformation, innovations, & modular code for strategic long-term projects

1mo

In some ways, this AI model mimics how our own brains piece together what we think is our memory of events: https://www.scientificamerican.com/article/perception-and-memory/

Ujjwal Dhungana

Electrical Engineering Student | Mechanical Engineering BSME Graduate | RAMS & T Engineer | Systems Engineer

1mo

Our professor once explained AI hallucinations. I love it when academia presents industry topics. 

Thomas Mercier

Business Intelligence | Data | Finance | Power BI | Azure Databricks | Final-year thesis: Cloud & Digital Sovereignty

1mo

Any preference: Florence or OpenCV?

I'll keep this in mind

Charonne Mose

Principal at C + AI at Microsoft

1mo

Amazing and so proactive!
