Summary
Artificial intelligence is advancing rapidly, with new and more powerful models released almost every week. However, the CEO of Moody's argues that simply having a better model is not enough to make AI successful in the business world. The real challenge facing the industry today is a lack of trust, and trust cannot be fixed by technology alone. To make AI truly useful for high-stakes decisions, companies must focus on the quality and connectedness of the data they use.
Main Impact
The primary impact of this shift is that AI models are becoming common tools, or commodities, that many companies can access. Because many models now perform at a similar level, the technology itself is no longer a major advantage. Instead, the real difference between success and failure lies in "connected intelligence." This means using data that is organized and drawn from many reliable sources to give the AI a complete picture of the world. Without this foundation, AI projects often fail to provide any real value to businesses.
Key Details
What Happened
Recently, Meta introduced its latest AI model called Muse Spark, which performs as well as other top models in the industry. While this is an impressive technical achievement, it highlights a growing trend: as more models enter the market, they start to look and act the same. Rob Fauber, the CEO of Moody’s, points out that the focus should move away from the "car" (the AI model) and toward the "navigation system" (the data). If an AI uses outdated or unorganized information, it will not be reliable, no matter how fast or powerful the model is.
Important Numbers and Facts
The stakes for getting AI right are very high, especially in the financial sector. According to research from MIT, about 95% of AI pilot programs fail to create a measurable impact for businesses. A major reason for this high failure rate is a weak data foundation. Furthermore, public trust in major institutions is falling globally. If companies use AI to make big decisions about loans, insurance, or safety without using verified data, they risk losing even more public confidence. Leaders in the tech world, including the CEO of NVIDIA, have noted that structured and organized data is the only way to find the "ground truth" for AI systems.
Background and Context
In the past, data was often kept in separate "silos," meaning different departments or systems did not share information. In today's world, risks are more connected than ever before. For example, a massive storm in one part of the world can break a supply chain, which then hurts the economy and changes how banks lend money. This is what experts call "Exponential Risk." Because these problems are all linked, an AI cannot give a good answer if it only looks at one small piece of information. It needs to see how climate, credit, and legal rules all affect each other at the same time.
Public or Industry Reaction
The tech and financial industries are starting to realize that scraping the general internet for information is not enough for professional AI use. Industry leaders are calling for data that is "normalized" and "calibrated," which means it has been cleaned and checked to match how the real world works. This process is difficult and takes a lot of work, but it is the only way to make AI decisions that can be explained to government regulators, company boards, and shareholders. There is a growing demand for AI outputs that are "defensible," meaning the company can prove why the AI made a specific choice.
What This Means Going Forward
Going forward, the focus of AI development will likely shift from building bigger models to building better data pipelines. Companies will need to ask their data teams whether their information is reliable and has been tested against real-world outcomes. The goal is to move from being reactive, waiting for a problem to happen, to being proactive, spotting risks before they cause damage. Organizations that can successfully combine their own internal data with high-quality third-party information will be the ones that make the best decisions.
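To make the idea of combining internal and third-party data more concrete, here is a minimal toy sketch in Python. Every name, field, and number below is invented purely for illustration; a real pipeline would involve validated feeds, schema checks, and audit trails far beyond this.

```python
# Toy sketch: join internal loan records with a hypothetical third-party
# climate-risk feed, so a downstream decision sees both sources at once.
# All field names, regions, and scores are invented for illustration.

internal_loans = [
    {"loan_id": "L-001", "region": "coastal-a", "amount": 250_000},
    {"loan_id": "L-002", "region": "inland-b", "amount": 120_000},
]

# Hypothetical third-party feed: flood-risk score per region (0 = low, 1 = high).
third_party_flood_risk = {"coastal-a": 0.82, "inland-b": 0.11}

def connect(loans, risk_by_region, threshold=0.5):
    """Merge the two sources and flag loans in high-risk regions."""
    connected = []
    for loan in loans:
        risk = risk_by_region.get(loan["region"])  # None if the feed lacks the region
        connected.append({
            **loan,
            "flood_risk": risk,
            "flagged": risk is not None and risk >= threshold,
        })
    return connected

for row in connect(internal_loans, third_party_flood_risk):
    print(row["loan_id"], "flagged" if row["flagged"] else "ok")
```

The point of the sketch is the shape of the work, not the code itself: each internal record is enriched with an external signal before any decision is made, and records the external source cannot vouch for are handled explicitly rather than silently passed through.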
Final Take
The true power of AI is not found in the code itself, but in the trust we can place in its results. For over a hundred years, markets have relied on transparent and independent analysis to function correctly. AI does not change this basic need; it simply makes the cost of being wrong much higher. To succeed, leaders must ensure that their AI systems are fed with connected intelligence that reflects the complex reality of our modern world. Trust is the most valuable asset a company has, and in the age of AI, that trust is built on a foundation of solid data.
Frequently Asked Questions
Why are so many AI projects failing?
Most AI projects fail because they are built on a weak data foundation. Even the most advanced AI models cannot produce useful results if the information they are given is unorganized, incomplete, or incorrect.
What is connected intelligence?
Connected intelligence is the practice of gathering and organizing data from many different sources so that an AI can see the full picture of a situation. This allows the AI to understand how different risks, like weather and finance, affect one another.
Why is trust more important than the AI model itself?
AI models are becoming very similar and easy to access. The real advantage for a company comes from being able to trust the AI's decisions. In high-stakes fields like banking and insurance, a "maybe" answer is not good enough; the results must be reliable and easy to defend.