One of the dirty secrets of a lot of these "code adjacent" areas is that they have very little testing.
If a data science team modeled something incorrectly in their simulation, who's gonna catch it? Usually nobody. At least not until it's too late. Will you say "this doesn't look plausible" about the output? Or maybe you'll be too worried about getting chided for "not being data driven" enough.
If an exec tells an intern or temp to vibecode that thing instead, then you definitely won't have any checkpoints in the process to make sure the human-language prompt describing the process was properly turned into the right simulation. But unlike in coding, you don't have a user-facing product that someone can click around in, or send requests to, and verify. Is there a test suite for the giant Excel doc? I'm assuming no, but maybe I'm wrong.
It feels like it's going to be very hard for anyone working in areas with less black-and-white verifiability or correctness, like that sort of financial modeling.
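For what it's worth, even a giant workbook can get a thin test suite. A minimal sketch, assuming the model can be read with pandas; the file, sheet, and column names are stand-ins, and the invariants would come from whoever owns the model:

    import pandas as pd

    def test_projection_sheet_invariants():
        df = pd.read_excel("model.xlsx", sheet_name="projections")

        # The model is supposed to cover 12 distinct months.
        assert len(df) == 12
        assert df["month"].is_unique

        # Domain invariants: no negative revenue or costs, and margin
        # should equal revenue minus costs within rounding error.
        assert (df["revenue"] >= 0).all()
        assert (df["costs"] >= 0).all()
        assert (df["margin"] - (df["revenue"] - df["costs"])).abs().max() < 1e-6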
> If a data science team modeled something incorrectly in their simulation, who's gonna catch it? Usually nobody. At least not until it's too late. Will you say "this doesn't look plausible" about the output?
I recently watched a demo from a data science guy about the impending proliferation of AI in just about all related fields; his position was highly sceptical, but with a "let's make the most of it while we can" attitude.
The part that stood out to me, which I have repeated to colleagues since, was a demo where the guy fed his tame robot a .csv of price trends for apples and bananas and asked it to visualise this. Sure enough, out comes a nice-looking graph with two jagged lines. Pack it, ship it, move on..
But then he reveals that, as he wrote the data himself, he knows that both lines should just be an upward trend. He expands the axis labels: the LLM has alphabetized the months but said nothing of it in any of the outputs.
If choosing the "wrong" model, or not wording your prompt in just the right way, is sufficient to not just degrade your output but make it actively misleading and worse than useless, then what does that say about the narrative that all this sort of work is about to be replaced?
I don't recall the bot he was using; it was a rushed portion of the presentation to make the point that "yes, these tools exist, but be mindful of the output - they're not a magic wand".
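That alphabetized-months failure is also the kind of thing a two-line sanity check catches before the chart goes anywhere. A hypothetical sketch in pandas, with the CSV and column names assumed:

    import pandas as pd

    MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
              "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

    # Columns assumed: month, apples, bananas.
    df = pd.read_csv("prices.csv")

    # Force calendar order instead of whatever order the tool returned,
    # and fail loudly if any label is missing or mangled.
    df["month"] = pd.Categorical(df["month"], categories=MONTHS, ordered=True)
    assert not df["month"].isna().any()
    df = df.sort_values("month")
    assert list(df["month"]) == MONTHS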
This has had tremendous real-world consequences. The European austerity wave of the early 2010s was largely downstream of an Excel spreadsheet error that changed the result of a major study on the impact of debt-to-GDP levels on growth.
Any, and I mean any, statistic someone throws at me, I will try to dig into. And if I'm able to, I will usually find that something is very wrong somewhere. As in, the underlying data is usually just wrong, invalidating the whole thing, or the data is reasonably sound but the person doing the analysis is making incorrect assumptions about parts of it and then drawing incorrect conclusions.
It seems to be an ever-present trait of modern business. There is no rigor, probably partly because most business professionals have never learned how to properly approach and analyze data.
Can't tell you how many times I've seen product managers making decisions based on a few hundred analytics events, trying to glean insight where there is none.
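To put rough numbers on that: with a few hundred events, the uncertainty usually swallows the "insight". Illustrative figures only, nothing from a real product:

    import math

    # Illustrative only: 300 events, 15 of which converted.
    n, conversions = 300, 15
    p = conversions / n

    # Normal-approximation 95% confidence interval for the rate.
    half_width = 1.96 * math.sqrt(p * (1 - p) / n)
    print(f"{p:.1%} +/- {half_width:.1%}")  # about 5.0% +/- 2.5%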
What are you optimizing all that code for? It works, doesn't it? Don't let perfect be the enemy of good. If it works 80% of the time, that's enough, just push it. What is technical debt?
If what you're saying 1) is true and 2) does matter in the success of a business, then wouldn't anyone be able to displace an incumbent trivially by applying a bit of rigor?
I think 1) holds (as my experience matches your cynicism :), but I have a feeling that data-minded people tend to overestimate the importance of 2)...
In my experience, many of the statistics these people use don't matter to the success of a business --- they are vanity metrics. But people use statistics, and especially the wrong statistics, to push their agenda. Regardless, it's important to fix the statistics.
Rigor helps you get better insights from data. That can help with entrepreneurship.
What also helps with entrepreneurship is having a bias for action. So even if your insights are wrong, if you act and keep acting, you will partially shape reality to your will and partially bend to its.
So there are forces that can compensate for a lack of rigor.
The best companies have both on their side.
I've frequently found, over a few decades, that numerical systems are cyclically 'corrected' until results and performance match prior expectations.
There are often more errors. Sometimes the actual results are wildly different in reality from what a model expects .. but the data treatment has been bug-hunted until it produces what was expected .. and then attention fades away.
This is, unfortunately, a feature of a lot of these systems. The sponsors don’t want truth, they want validation. Generative AI means there don’t even have to be data engineers in the mix to create fake numbers.
> If a data science team modeled something incorrectly in their simulation, who's gonna catch it? Usually nobody. At least not until it's too late. Will you say "this doesn't look plausible" about the output?
The local statistics office here recently presented salary statistics claiming that teachers' salaries had unexpectedly increased by 50%. All the press releases went out, and it was only questions raised by the public that forced the statistics office to review and correct the data.
> If a data science team modeled something incorrectly in their simulation, who's gonna catch it? Usually nobody. At least not until it's too late.
Back in my data scientist days I used to push for testing and verification of models. I got told off for reducing the team's speed. If the model works well enough to get money in, and the managers who make the final calls do not understand the implications of being wrong, the testing never happens; and that would be the majority of cases.
I would say that although Claude may hallucinate, at least it can be told to test the scripts. Many data scientists will just balk at the idea of testing a crazy Excel workbook with lots of formulas that they themselves inherited.
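Even a couple of regression tests beat nothing, and they're the sort of thing you can ask the tool to draft and then read yourself. Hypothetical sketch; "forecast" and its expected values are stand-ins for whatever the workbook or script is actually meant to compute:

    import pytest
    from model import forecast  # hypothetical module and function

    def test_forecast_matches_hand_checked_case():
        # Expected values worked out by hand for a tiny input.
        result = forecast(initial=100.0, growth_rate=0.10, periods=3)
        assert result == pytest.approx([110.0, 121.0, 133.1])

    def test_forecast_is_monotonic_for_positive_growth():
        result = forecast(initial=100.0, growth_rate=0.05, periods=12)
        assert all(b > a for a, b in zip(result, result[1:]))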
I did a fair amount of data analysis, and deciding when or if my report was correct was a huge adrenaline rush.
A huge test for me was to have people review my analyses and poke holes. You feel good when your last 50 reports didn’t have a single thing anyone could point out.
I’ve been seeing a lot of people try to build analyses with AI who haven’t been burned by the "just because it sounds correct doesn’t mean it’s right" dilemma, and who haven’t realized what it takes before you can stamp your name on an analysis.