10x Disruptors: Accessing Data in the Enterprise
Unlocking outsized improvements in the enterprise with AI and hyper-scale data processing.
This is the second in our series about Massive’s focus in 2025. To see more of our thoughts and follow along, please subscribe.
While most of the news and hype around AI has centered on the large model providers like OpenAI, Anthropic, Google, and now DeepSeek, Massive believes substantial enterprise value is about to be unlocked by task- and industry-specific AI-first companies as well as hyper-scale data processors. Two major developments are taking place: the first is a tighter focus in AI, and the other is step-function efficiency gains on the data side.
For years, enterprises have invested in traditional SaaS, BI tools, analytics platforms, and data lakes, yet for most of them, accessing and acting on data remains a slow, complex, and fragmented process. Many insights stay locked behind dashboards, IT requests, or structured reports. Others are buried and inaccessible because of the sheer size of the data and the specialized language, process needs, and complex structures within each industry and function. Traditional tools struggle to bridge this gap, leaving valuable data behind a wall of complexity and the heavy lift required to manage such large-scale data lakes. Historically, the answer was always “buy more software,” which only consumed more time and money across the organization and its consultants.
This is where hyper-scale data processors and industry- and task-specific Large Language Models (LLMs) become crucial. They are specialized gold miners, equipped to navigate the unique terrain of different domains without being overwhelmed by petabytes of data.
There is a lot of talk about “Agentic AI” right now, and we see these task- and process-aware AI services as part of a fabric of agents that can interact, process, and execute tasks through capabilities such as:
Contextual Understanding and Deep Domain Expertise: These models don't just understand keywords; critically, they learn the jargon and understand the nuances and context in which those keywords are used. They can differentiate between similar words and terms based on the specific use case they are trained on; they understand what the user is ‘trying’ to accomplish. They can identify valuable data and insights amongst the noise.
Generating Targeted Insights and Refining the Raw Material: Industry-specific LLMs don't just present data; they refine it, generating insights tailored to the specific needs of the industry. Data is only as powerful as our ability to use it, and the significant acceleration from input to insight is what makes this compelling.
Automating Specialized Tasks and Streamlining the Mining Process: These LLMs, with agents talking to agents across the enterprise, can automate complex tasks specific to their domain.
Massive sees these AI-native companies focused on highly specific industry or task-related pain points as driving a new Golden Age of industry-specific value. By unlocking this domain-specific data gold, these companies will empower enterprises to:
Make better decisions faster: With access to relevant, contextualized insights, employees can make more informed decisions, leading to improved outcomes. They can finally leverage all of that data infrastructure effectively.
Boost productivity: Automating specialized and repetitive tasks or streamlining slow-moving, multi-stakeholder systems frees up employees to focus on higher-value work.
Drive innovation: By uncovering hidden patterns and trends, LLMs can help identify new opportunities for growth and innovation.
Gain a competitive edge: Enterprises that effectively leverage their data gold will be better positioned to compete in their respective markets.
At the same time, the engines driving this innovation are creating data sets that overwhelm existing infrastructure. Here, too, there is true innovation: unlocking the right data insight at the right time while decreasing the costs of both processing and energy consumption. Just as websites once became slow or unavailable when overwhelmed with visitors (a problem that has since been solved), the hyper-scale data world needs solutions that improve performance even as datasets grow exponentially.
Understanding the language, context, and tasks of each industry or business process, combined with the ability to access the needed information quickly and efficiently, transforms how enterprises access and leverage data, paving the way for a new era of data-driven success within each vertical. When domain-specific agentic AI is applied to narrow, deep data sets, and accessing and processing that data does not carry a huge time and energy cost, the results can 10x output speed, inferences, and decision support. The ripple effects across a large organization can be exponential.
AI is already changing the world, and yet we know it is still in the early innings. We are also seeing an explosion of hyper-scale data centers and data platforms, already straining under cost, workload, and energy demands. The data itself keeps growing: advances in adjacent technologies keep increasing both the amount we can capture and the fidelity of that data.
We are interested in companies applying task- and process-specific LLMs in novel ways, as well as innovative data processing technologies applied against large markets, that will result in substantial breakthroughs in efficiency, knowledge transfer, and discovery. These companies will 10x what is possible in their respective domains once they achieve commercial scale.
Key Portfolio Investments:
I have been integrating LLMs into my daily workflow. I do a lot of creative development that benefits greatly from a swift, iterative approach. In my world, versioning fully developed concepts is key to understanding the overall picture, and LLMs do in fact 10x or more the speed of these cycles.
With this type of efficiency comes a new issue... sure, from the top down, this tool empowers me, the creative executive who knows what I'm looking for and can make experience-based judgments about whether what the LLMs present to me is useful or correct. But... how did I become that person who knows things and can apply judgment? I was the assistant to brilliant minds before me, and institutional knowledge was beaten into me... er, bestowed upon me, to get me where I am today.
Now, what happens to the next generation of creative executives, the current assistants, who fall victim to these new, efficient LLM workflows? How do I replace myself? How do I make new creatives? How do I pass down institutional knowledge?
Society prospers when old-timers sow seeds they will never reap… I’ve been thinking about this a lot lately, and I am working on developing a more apprenticeship-style learning structure…
We are in exciting times folks!