The "pro-code" camp is right about the flaws. Spreadsheets lack version control, they are prone to "broken link" nightmares, and they struggle with scale. But the "pro-grid" camp is right about the human element. You cannot "feel" the data in a Python script the way you can in a cell.
At NeutronTech (neutrontech.ai), we agree that spreadsheets are often "janky software," but we also believe the grid is an irreplaceable cognitive interface for auditing and iteration.
We didn't build a better spreadsheet; we built a software engine that looks like a spreadsheet.
- NeutronGrid isn't a web of fragile references. It’s powered by a data engine.
- The Grid is an IDE. We kept the grid because it is the best UI ever designed for data interaction. It’s not "shadow IT." It’s a visual interface for high-performance software.
- The AI is a bridge. Because it is structured like real software, our AI doesn’t just "hallucinate" numbers. It uses the dedicated data engine, validates against 3,000+ tests, and provides answers you can actually trust.
- It's built for Apple Silicon. We optimized with Metal for genuinely GPU-accelerated analytics and 20 chart types with 3D rendering.
It's wicked fast, the AI understands your cell references and data structure, and it's currently in beta.
Andrew Chen is directionally right—but the future isn’t spreadsheets vs code, it’s convergence. Tools like NeutronGrid from Neutrontech.ai treat the grid as an interface, not the logic layer—combining the inspectability and intuition of Microsoft Excel with the power, testing, and scalability of software. Instead of replacing spreadsheets, this approach compiles them into AI-native, auditable systems—preserving how humans think while unlocking what code can do.
- Grid as an interface, not the source of truth
- Code as the execution layer, not the user experience
- AI as the translator between intent, logic, and output
NeutronGrid is effectively turning spreadsheets into:
1. Inspectable, testable systems (like software)
2. Composable building blocks (not fragile cell webs)
3. AI-co-authored models that remain fully auditable.
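Since NeutronGrid's internals aren't public, here is only a generic sketch of the "grid as interface, code as execution layer" idea: cells become named, pure functions that can be unit-tested like any other code. All names and the evaluation scheme below are hypothetical.

```python
# Sketch: a "grid" whose cells are named pure functions instead of
# fragile positional references. All names are hypothetical; this
# illustrates the pattern, not NeutronGrid's actual design.

def grid_eval(cells, name, _seen=None):
    """Evaluate cell `name`, resolving dependencies recursively."""
    _seen = _seen or set()
    if name in _seen:                      # catch circular references
        raise ValueError(f"circular reference at {name!r}")
    value = cells[name]
    if callable(value):                    # a formula: gets a resolver
        return value(lambda n: grid_eval(cells, n, _seen | {name}))
    return value                           # a literal input

cells = {
    "revenue":    1200.0,
    "costs":       800.0,
    "margin":     lambda get: get("revenue") - get("costs"),
    "margin_pct": lambda get: get("margin") / get("revenue"),
}

# Because each formula is an inspectable function, it can be tested
# directly -- the "tests over model logic" idea in miniature.
assert grid_eval(cells, "margin") == 400.0
assert abs(grid_eval(cells, "margin_pct") - 1 / 3) < 1e-9
```

The point of the sketch: once a cell is a named function rather than an opaque reference, the "fragile cell web" becomes a dependency graph you can test and audit.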
We're just a few days away from launching this revolutionary platform.
I think the real shift isn’t “Excel vs AI”, it’s where understanding happens. AI struggles with PDFs and spreadsheets. It works best when the data is already structured with context and meaning attached.
So instead of replacing Excel, the opportunity is to build that intermediate layer and move the intelligence upstream: a contextualised data layer that powers the finance workflows that have historically lived in Excel.
I recently built my financial model in Excel and converted it to an app using AI. The web version provided immediate access to the important data and let me quickly change inputs or views via controls in a sidebar. Personally, I like it better than the spreadsheet version, but it does create an issue for the person viewing the data: the coded version requires the end user to trust that your calculations are right, because most of them will not have access to the code or simply can't read it. The spreadsheet lets them simply click into a cell to review your logic.
A point that seems to be missing: you can't use AI effectively unless you're qualified to judge the results.
Having AI help you build a spreadsheet: sure, as long as you check those formulas. Which you understand because, as you said, it doesn't take long to learn spreadsheets.
Having AI help you build code: sure, if you're a programmer. That's a different time commitment.
Andrew Chen occasionally makes a post that directly reflects a project I'm actually working on.
In the Bay Area, people pass around spreadsheets to find side parties and events for things like Tech Week. You spend hours manually browsing these lists.
I wrote an AI that lets you type a prompt describing yourself and your tech event goals, and it ranks the top tech events for your needs. Example prompt: "I am a founder in the agricultural AI space, and I went to UC Berkeley." The AI will surface an event such as "Deep Farm Tech, Hosts: Former Berkeley Alum X Ventures".
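For the curious, the core of prompt-to-event ranking can be sketched with plain token overlap between the prompt and each event description. A real system would presumably use embeddings or an LLM scorer, and all event data below is invented for illustration.

```python
def rank_events(prompt, events, top_n=3):
    """Rank events by how many prompt words appear in their description."""
    words = set(prompt.lower().split())
    scored = [(len(words & set(desc.lower().split())), name)
              for name, desc in events]
    # Highest overlap first; drop events that match nothing.
    return [name for score, name in sorted(scored, reverse=True)[:top_n] if score]

# Invented sample data, loosely matching the example prompt above.
EVENTS = [
    ("Deep Farm Tech", "agricultural AI founders, hosts include Berkeley alum"),
    ("Crypto Mixer", "web3 tokens and defi networking"),
    ("SaaS Growth Night", "B2B founders and growth marketers"),
]

top = rank_events("I am a founder in the agricultural AI space, Berkeley grad", EVENTS)
```

A bag-of-words score like this is crude, but it shows why a ranked shortlist beats hours of manually scanning a shared spreadsheet.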
I also plan to adapt this tool for Tech Week. As a use case that directly applies to a16z/speedrun and this post: this AI tool kills event spreadsheets, and it is simply a more efficient way for founders, investors, recruiters, and others to find a16z Tech Week events.
This tool is just a surface embed for a deeper AI agent platform that I showed Andrew in person. The character in the video is a powerful AI agent that can help the user control the website it is embedded in.
Bay-Events: a website to find all tech events with AI, powered by Yakimo AI. Any customer can create a site like Bay-Events and power it with Yakimo.
Yakimo AI: a powerful cross-surface AI agent persona platform, with memory, integrations, embeds, personalities, custom faces, and automatic agent-to-agent communication.
Here is a LinkedIn post about the tool:
https://www.linkedin.com/posts/jermaine-dennis-813212a_gtc2026-nvidiagtc-aiagents-activity-7439626039099781120-5PpF
I'm really interested in having this power some TechWeek event discovery.
Spreadsheets are immortal. So no, unless a less expensive and simpler product is created.
Before long, the interface for essentially all data analysis will be a light wrapper around an SDM estimator over a large neural network, providing robust uncertainty estimation plus interpretability-by-exemplar. This won't be a matter of jumping on the bandwagon of new tech or trying something cool; this approach will simply outstrip the human error of the analyst interacting with the spreadsheet (or any other programming language), for both the output and the understanding of the built model, including for non-experts and downstream consumers of the analyses.
In other words, the disruption from neural networks for data analysis will come not just from raw marginal accuracy, but from the fact that the uncertainty over a prediction can be robustly estimated over high-dimensional inputs, and that the provenance of the output can be mapped back to the data.
This is a more sophisticated version of Garbage In, Garbage Out (GIGO).
Stripping away the jargon, this argument is:
1) The Accuracy: Better math (neural networks) + better data = better answers.
2) The "Uncertainty" part: The AI can tell you, "I'm 80% sure about this number."
3) The "Provenance" part: You can trace the answer back to the specific data points that created it.
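The "uncertainty" and "provenance" claims can be made concrete with the simplest possible stand-in, a nearest-neighbor lookup (not an SDM estimator, and all data here is made up): every answer carries both a confidence and a pointer back to the training rows that produced it.

```python
import math

# Toy training data: (features, label, source_row). The source_row is
# the provenance handle, e.g. a row id in the original dataset.
TRAIN = [
    ((1.0, 1.0), "low",  "row-17"),
    ((1.2, 0.9), "low",  "row-42"),
    ((5.0, 5.2), "high", "row-88"),
    ((4.8, 5.0), "high", "row-91"),
]

def predict(x, k=3):
    """Return (label, confidence, provenance) for input x.

    Confidence = fraction of the k nearest exemplars that agree;
    provenance = which training rows drove the answer.
    """
    nearest = sorted(TRAIN, key=lambda t: math.dist(t[0], x))[:k]
    labels = [lab for _, lab, _ in nearest]
    top = max(set(labels), key=labels.count)
    conf = labels.count(top) / k
    prov = [row for _, lab, row in nearest if lab == top]
    return top, conf, prov

label, conf, prov = predict((1.1, 1.0))
# The answer carries both "how sure" and "which data said so".
```

A real neural estimator would do this over high-dimensional inputs, but the interface is the same: answer, uncertainty, and a trail back to the data.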
You are arguing for a "black box" that is slightly more transparent. However, if the input is garbage, the "robust uncertainty estimation" will just tell you—with great mathematical precision—that the output is also garbage.
Thanks for the follow-up. I'm not sure I follow: If you have a model that alerts the analyst that the output is unreliable when the input is nonsense/garbage, that's a really good thing (and traditionally, has been very difficult behavior to back out from learned models), because then the human user knows to look at the data and not rely on the output. (The alternative where the model gives a highly confident wrong answer over the covariate/distribution-shifted data is highly problematic.)
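A crude illustration of why that alert is valuable: compare an incoming input against the training distribution and abstain instead of answering. A z-score check stands in here for real covariate-shift detection, and all numbers and thresholds are invented.

```python
import statistics

# Invented training distribution of some input variable.
TRAIN_VALUES = [98.2, 101.5, 99.9, 100.4, 97.8, 102.1, 100.0, 99.3]
MEAN = statistics.mean(TRAIN_VALUES)
STD = statistics.stdev(TRAIN_VALUES)

def answer_or_abstain(x, z_cutoff=3.0):
    """Refuse to answer when x is far outside the training distribution."""
    z = abs(x - MEAN) / STD
    if z > z_cutoff:
        return None, f"abstain: input {x} is {z:.1f} sigma from training data"
    return x * 1.07, "ok"   # the multiply stands in for the real prediction

val, status = answer_or_abstain(100.5)    # in-distribution: answers
_, status2 = answer_or_abstain(5000)      # garbage input: abstains
```

The point is the second branch: a confident wrong answer over shifted data is the failure mode; an explicit abstention sends the human back to the data.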
(Also, just to be clear, this is not the behavior one gets from LM APIs today, which do not have this modeling. The typical chat LM will hallucinate and proceed to give the analyst highly confident wrong answers, which is a deal breaker for data analysis, except in some narrow cases where a human carefully checks every step, essentially using it as a retrieval system.)
The related way I think about it: The inflection point comes when the LM+estimator is a more reliable estimate of distribution shifts than the analyst can build via linear models (or related) via the typical spreadsheet functions.
There’s a real academic elegance to what you’re describing; the idea of a model with that level of self-awareness is honestly fascinating to think about.
But as a systems engineer who spends my days under the hood of AI architecture, I tend to look at it through a slightly more pragmatic lens. Even with my tendency for taking the scenic route in a conversation, as a cornbread/sweet tea raised Southern boy, when it comes to systems design, I’m a big believer in the shortest path to the truth.
To me, a 'robust uncertainty estimation' feels a bit like a sophisticated check-engine light. It’s certainly a breakthrough to have a model that admits when it’s lost, but in a high-stakes production environment, 'I’m 80% sure we’re off track' doesn't quite get the plane on the ground. We aren't really looking for a model that can report its own confusion; we’re building for a world where the confusion isn't allowed to reach the output in the first place.
At my startup, we’ve found it’s often more reliable to let the LLM be the hands rather than the brain. We use a deterministic engine (like DuckDB) to handle the logic, and let the AI write the code to query it. It might not be as poetic as an SDM estimator, but it gives us a "Glass Box" where a human can audit and own the result (human-in-the-loop).
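A minimal sketch of that "hands, not brain" split, with Python's built-in sqlite3 standing in for DuckDB: the model's only job is to emit SQL, the deterministic engine computes the number, and the human audits the query text. The hardcoded query string below stands in for LLM output.

```python
import sqlite3

# Deterministic engine: sqlite3 here, standing in for DuckDB.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("west", 120.0), ("west", 80.0), ("east", 50.0)])

# In the real system an LLM would draft this query from a natural-
# language question; the SQL string is what the human audits.
llm_drafted_sql = "SELECT SUM(amount) FROM sales WHERE region = 'west'"

# The engine, not the model, produces the number, so the same query
# always yields the same answer.
(total,) = con.execute(llm_drafted_sql).fetchone()
assert total == 200.0
```

The design choice: the model's output is reviewable text, and the arithmetic is done by an engine that never hallucinates.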
The real inflection point for me isn't when a model can guess its own errors better than an analyst. It’s when we can stop asking the user to 'trust' a probability score and just give them a tool they can verify. In the trenches, a 20% chance of being wrong is just a $1M mistake waiting to happen. We’d rather give the user the wrench than a weather report.
I’m sure the high-dimensional mapping work is deeply rewarding. However, accuracy is a math problem, and conviction is a UI problem. We’re solving for the latter.
=LLM() still requires grid format.
How about enhancing the experience before completely wiping it away? The end user of Excel is generally not the creator; it's the executives who will ask "what if?" and "what levers can I pull?"
The value, then, is that the user can reliably (key word: reliably) make those changes and verify the calculations before a multi-million-dollar (or billion-dollar) deal is decided on those parameters.
Has anyone in the thread actually uploaded an Excel spreadsheet into an AI chatbot and asked it questions? AI hallucinates a lot and gets the simplest stuff wrong (e.g. wrong row). It's not ready for prime time, not in the slightest. (investment analyst here)
95% right is still 0% right.
This is what Cybel and Cathex are already making a thing of; you should check them out!
The debate is stuck on UI when the real problem is the logic layer.
Peters is right that front office finance can't accept 95% correctness. But that's an argument for determinism and making logic explicit and auditable, not for keeping it buried in cell references.
The grid can stay. The logic doesn't have to be invisible.