This whole article smells a bit of someone being salty they couldn't sell their software.
Having worked in corporate with vaguely software-buying related stuff, I am confused at why so many small companies think an enterprise would be excited to go with them.
Even if I love your product, how do I pitch to the powers that be that we replace something we are already paying for with this new thing? The company might make billions but I've always had to fight for my budgets.
And tell me again why we should bet our core operations on a two man outfit with six months runway? What happens when you pivot? What happens when our competitor acquires you? What happens when you go on a transatlantic flight and a key expires?
Selling to enterprise early on is a poisoned chalice as well. They have much larger teams, so you'll be dealing with a horde of product owners, compliance specialists, data privacy experts, who might never touch your product but come with excel sheets with 300 rows of gnarly questions. Not to mention just getting the bills paid can be a huge fight.
It will drag you into their orbit, especially if 80% of your revenue is from a single customer. Soon your other customers will start going to someone who actually has time to care about them. And by then there's been a political shift in-house and the new VP of X gets a quote for an outsourcing bundle from his squash buddy at one of the big system integrators. Your line item gets bundled into this to justify the cost even though it's not even relevant. And that's the end of your company.
If you do want to sell, treat the enterprise like an ecosystem of SMEs, find a department or team who are more innovative and sell to them behind the backs of enterprise IT. Once you've entrenched yourself and the users love you, then you can expand to other teams and eventually enterprise IT will be forced to negotiate with you for a license and do the compliance dance. But even so this will take years of effort and luck.
> If you do want to sell, treat the enterprise like an ecosystem of SMEs, find a department or team who are more innovative and sell to them behind the backs of enterprise IT. Once you've entrenched yourself and the users love you, then you can expand to other teams and eventually enterprise IT will be forced to negotiate with you for a license and do the compliance dance. But even so this will take years of effort and luck.
This is the way.
There are backdoors as well. If you can get your software on a pre-approved vendor list at a big consultancy you can bypass a lot of the song and dance with IT. Companies like Xerox have lists like this. They sign long-term contracts with enterprise customers whose business units can use their part of the budget to get any of the software on the list.
All you have to do from there is market to the right people running those business units.
Selling through the normal IT channels is much harder. It can take 6-9 months of back and forth and you'll still likely get denied more often than not. Enterprises would rather contract with a vendor like SAP, Xerox, Microsoft, etc which is all integrated with their systems already and has the advantage of the Lindy effect in place.
I agree. The risk at the CTO/CIO level is that it's four years later, the startup went under, and you have this software integrated into your environment. If you're lucky, someone else will have bought it. They'll on-ramp you to their stack. But then you run the risk of their seeing you as trapped. It's not about how much money you want to pay for the product. It's about extortion.
Or, if you're less lucky, you'll be left with software you can't maintain, even if there's a contract clause that says you get all the yummy, yummy source code. You may not even be able to open source it because you don't own the copyright to some or all of the code. You just have the source code. Good luck with that.
No one gets fired for buying IBM because you know (or at least we once knew) IBM would definitely be around for years to come to support the product. Is it expensive? Yes. Have I found a lot of enterprise products miserable to use? Yes. Does everything have the stink of "well we made it work well enough not to get fired?" Yes. But you won't be getting extorted Broadcom style, or sitting around with 5,000,000 lines of AI-generated source that has all sorts of hacks and workarounds for the four other companies to whom the startup sold their software.
The other side of this is instructive too. We've sold into mid-market accounts and the decision isn't usually 'is this better' but 'what happens to me if this breaks'. The incumbent's main feature isn't functionality, it's someone else's neck on the line if it goes wrong. The winning move for a small SaaS is, afaik, to get a champion inside who's willing to own that risk personally, and make sure they look very good when it works.
> so you'll be dealing with a horde of product owners, compliance specialists, data privacy experts, who might never touch your product but come with excel sheets with 300 rows of gnarly questions
There is nothing like being on a call when the product isn't working right and the customer has 28 people from their side and only 2 of them know anything about the subject, but 26 of them have very strong conflicting opinions.
> There is nothing like being on a call when the product isn't working right and the customer has 28 people from their side and only 2 of them know anything about the subject, but 26 of them have very strong conflicting opinions.
This is no problem per se if you, for example, don't score too high on the Agreeableness dimension in the five-factor model. [1] :-)
The problem rather is that the people who by their personality traits can tolerate such situations quite well are very often not the kind of people that customers want as support contact persons and vice versa.
Heh, I'm agreeable when I (at least) think you are doing the right thing, and get short pretty quick when I feel like you're wasting my time.
I've learned a number of strategies over time dealing with crap like the above. Typically it's getting a manager/customer service on the call with the large group and taking the people I want on the call off to a separate call where things actually happen.
Most of the time we'll have things fixed, or at least a plan for a solution done before the big group has got past anything at all.
And if you really must target enterprise customers, then it might be better for an SMB to pitch Design, Build, and Operate consulting engagements rather than traditional finished software products or services.
Or perhaps even partner with a larger consultancy who could be relied upon for the "operate" phase, leaving you to concentrate on the (generally more interesting) design & build parts.
The moral of this story is: it is human nature that when we have something, we do not want to lose it. This is an entirely different paradigm between what we do when we do not have something. It explains why the wealthy are so toxic. Their only goal in life is not to lose what they have.
I worked at a well-respected technical company and was given the task of evaluating a small company that we could acquire. I looked at the technology: something anyone could put together in a day. I looked at the business model. It was that you get free storage if you get a friend to sign up for free storage!!
I told the company that it had no technology and a business model that made no sense. They bought the company. Why? Because the target company told them that other companies were interested - and they were.
They did not want to miss the boat and lose what they had. Nothing came from this acquired company. Meanwhile the fundamental technology was disrupted by something new and the company fell apart. End of story. This is common.
So AI? This is about not missing the boat. Someplace, somewhere there is value in AI, but for now, if you have missed the boat you are probably better off. So no, this is not (as the current top comment says) about "they couldn't sell their software". This is about a very real reason why companies try to not miss the boat rather than innovate.
[ASIDE] And I cannot help but laugh at the Clojure reference with the statement "two things are simple if they are not intertwined". I have always been interested in Clojure, but I never go there because it is not "simple". It is intertwined with Java, which I know all too well and do not love. Java was the language of choice at this same company and I wasted too many months of my life bowing before that cumbersome language.
Commenting on the aside: that was my first reaction as well (years ago). But really you can treat it mostly as having a mature runtime and freebies and get a lot out of the language. Many who use and like Clojure, don’t necessarily like Java the language, or have similar reservations like you.
"When the software is being written by agents as much as by humans, the familiar-language argument is the weakest it has ever been - an LLM does not care whether your codebase is Java or Clojure. It cares about the token efficiency of the code, the structural regularity of the data, the stability of the language's semantics across releases."
Isn't familiarity with the language even more the case with an LLM? The language they do best with is the one with the largest corpus in the training set.
And they're very sensitive to new releases, often making it difficult to work with after a major release of a framework for example. Tripping up on minor stuff like new functions, changes in signatures etc.
A stable mature framework then is the best case scenario. New frameworks or rapidly changing frameworks will be difficult, wasting lots of tokens on discovery and corrections.
Familiarity matters to some degree. But there are diminishing returns I think.
Stability, consistency and simplicity are much more important than this notion of familiarity (there's lots of code to train on) as long as the corpus is sufficiently large. Another important one is how clear and accessible libraries, especially standard libraries, are.
Take Zig for example. Very explicit and clear language, easy access to the std lib. For a young language it is consistent in its style. An agent can write reasonable Zig code and debug issues from tests. However, it is still unstable and APIs change, so LLMs get regularly confused.
Languages and ecosystems that are more mature and take stability very seriously, like Go or Clojure, don't have the problem of "LLM hallucinates APIs" nearly as much.
The thing with Clojure is also that it's a very expressive and very dynamic language. You can hook up an agent into the REPL and it can very quickly validate or explore things. With most other languages it needs to change a file (which are multiple, more complex operations), then write an explicit test, then run that test to get the same result as "defn this function and run some invocations".
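To make the contrast concrete, here is a minimal Python sketch of the same idea (all names here are illustrative, not a real agent API): evaluating source directly in a live session lets one step define a function and immediately invoke it, which is the shortcut a REPL gives an agent.

```python
# Minimal sketch of REPL-style validation: evaluate source directly in a
# live, persistent namespace and invoke the result at once, instead of
# the slower edit-file / write-test / run-test cycle. Illustrative only.

def repl_eval(source: str, namespace: dict) -> dict:
    """Evaluate source in a shared, persistent namespace."""
    exec(source, namespace)
    return namespace

session = {}  # the live session state, kept across evaluations

# Step 1: the agent defines a function in the session...
repl_eval("def slugify(s): return s.strip().lower().replace(' ', '-')", session)

# Step 2: ...and validates it with an immediate invocation.
result = session["slugify"]("  Enterprise IT  ")
print(result)  # -> enterprise-it
```

In Clojure the same loop is native to the language (a `defn` plus an invocation in one REPL round trip); the sketch above is only an analogy for how much ceremony direct evaluation removes.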
> Languages and ecosystems that are more mature and take stability very seriously, like Go or Clojure, don't have the problem of "LLM hallucinates APIs" nearly as much.
Counterexample: the Wolfram programming language (better known to many people from the Mathematica computer algebra system).
It is incredibly mature and takes stability very seriously, but in my experience LLMs tend to hallucinate a lot when you ask them to write Wolfram or Mathematica code.
I see the reason in two points:
1. There exists less Wolfram/Mathematica code online than for many other popular programming languages.
2. Code in Wolfram is often very concise; thus it is less forgiving of "somewhat correct" code (which is in my opinion mostly a good thing), and so LLMs often struggle to write Wolfram/Mathematica code.
Yes, I'd agree that from the perspective of the model one cohesive, well-established language would be more reliable. The nightmare scenario is an enterprise suite with a hodgepodge mix of every language known to man all mangled together, because the frontier model at the time decided Haskell would be the most efficient when compiled to WebAssembly, and some poor intern has to fix a bug that should cost 100x less to fix than rerunning the LLM to patch it.
> The language they do best with is the one with the largest corpus in the training set.
Up to a point, I guess? There must be a point of diminishing returns based on the expressiveness of the language
I mean, a language that has 8 different ways to declare + initialise composite variables needs to have a much larger training corpus than a language that has only 2 or 3 different ways.
The more expressive a language, the more different suitable patterns would be required, which results in a larger corpus being needed.
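As a hedged illustration of the corpus-size point: even Python, a relatively compact language, offers several surface forms for building the same composite value, and each form is a distinct pattern a model must have seen during training.

```python
# Several equivalent ways to build the same list in Python - each one a
# distinct surface pattern an LLM has to learn from its training corpus.
a = [0, 1, 2, 3]              # literal
b = list(range(4))            # constructor over an iterable
c = [i for i in range(4)]     # comprehension
d = []                        # imperative append loop
for i in range(4):
    d.append(i)
e = [*range(4)]               # iterable unpacking

# Five surface forms, one value. A language with eight forms per
# construct needs correspondingly more training examples to cover them.
assert a == b == c == d == e == [0, 1, 2, 3]
```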
I spent about two hours last night trying to get a consistent and accurate answer out of Claude regarding a set of graphics APIs. I then went the old fashioned way to find most of the articles outside of a couple of sources were also incorrect API slop. I can't override methods that don't exist and never have existed in an API, but that's what the clankers have latched on to.
Just before that, at work, I found a bug in an AI driven refactor of code. For some reason, both the original refactor and the ai driven autocomplete kept trying to send the wrong parameters to a function. It was determined to get it wrong, even after I manually fixed it. [Edit - I should also mention the AI driven code review agent tried to do the same thing. The clankers are consistent.]
This is why familiar language matters. Because at some point, you'll have bugs that the AI can't fix. And by the way, I use LLM tools at work and have a set of skills that improve my productivity, if not my QoL. But I still need to be able to dive into the language, the build tools, and fix things.
SRE here. Blog author seems to not understand the business side of the house which is concerning.
Companies pick Java or .Net because hiring developers is easy, which business side loves, and a lot of business development work is not rocket science. It's taking business logic and implementing in code.
> Companies pick Java or .Net because hiring developers is easy, which business side loves
Instead of giving a counter-argument, I'll link to a parallel discussion thread concerning "hiring developers for programming language X is easy": https://news.ycombinator.com/item?id=47888298
> a lot of business development work is not rocket science. It's taking business logic and implementing in code.
In my experience (and I claim that I am sitting rather close to the source), it is rather that developers who implement business logic are typically actively held back or prevented from inventing smart solutions for the problems that the company has - even if these (very often) would be very helpful for the company.
In the area of implementing business logic, tall poppy syndrome [1] is thus very prevalent: you are strongly hinted not to think of innovative solutions, but to be a good worker bee. This is why, in my opinion, implementation of line-of-business applications is frowned upon by many good programmers, and not because the questions that you are involved with are "boring" (they are not!).
Jane Street is always a bad example since they are working on niche problems that few people experience.
Sure, Tall Poppy happens because A) it's human nature and B) companies don't want unusual poppy sizes; they want the same size, so when it's time to harvest some of them, they can just cut quickly.
> Jane Street is always a bad example since they are working on niche problems that few people experience.
Surprisingly (?), in my experience in a lot of industries people (or more specifically: programmers who develop internal software for this industry) work on problems that are incredibly niche outside this industry, and thus incredibly few people ever experience.
This article seems to fundamentally misunderstand what 'enterprise IT' is all about (enterprise IT being different from IT for a tech-native).
IT is a highly dynamic system, and enterprises optimize for a minimal set of capabilities at the maximum level of abstraction under high levels of uncertainty and different inherited states.
This results in decisions that may not appear technically optimal but which are still an optimal outcome under the extreme uncertainty that an 'enterprise' operates in vis a vis technology paradigms.
Add to this that there is no one technology operating model. Everyone has a different starting point, different inherited technical debt. They are optimizing to their own starting point, not a clean slate.
This is what people don't get about what Microsoft actually does - it abstracts both at the technical level and the operational (contracting) level. This is valuable for an organization whose core competency is not technology, even if it does not lead to the most optimal outcomes from a pure technology perspective.
Yeah, "Nobody ever got fired for purchasing IBM"... a story as old as time itself.
But that is the "fear" side of the enterprise sales equation... The "greed" side of it is for the buyer to make the long / short hedge.
The exec who gets the value of the working product can potentially come out shining, when their peers will be furiously backpedalling next year. And this consummate exec can do it by name-associating with their "main bet" which is optically great for the immediate term but totally out of their control (because big corp vendor will drag its feet like every SAP integration failure they've seen), and feeling a sense of agency by running an off-books skunkworks project that actually works and saves the day.
A fine needle to thread for the upstart, but better than standing outside the game.
This is still true today. Gartner makes a living out of it. Always prefer buying the "familiar" product rather than being successful with the right solution.
Fortunately, history shows that those who do their math right actually end up being extremely successful: Google using Linux hardware for their DB servers, AWS developing their own network equipment and protocols, etc. It takes guts, but when it works it leaves the competition years behind.
> “…the buyer bought what was familiar to them, not what was right.”
This friction, and the line dividing solutions from consulting, gave me an idea: they're describing conditions where the LLM revolution might track with the desktop revolution. Companies, groups within companies, and small businesses will DIY it and say good enough.
Except not really when big enterprise needs another party to hold blame and prove compliance to regulations and standards to auditors and customers.
When you hire a big company like Microsoft to handle some enterprise function of your business, you have someone who is already certified in whatever regulatory thing you need, and you have someone big enough to sue if they mess up.
I can vibecode Google Drive in a weekend but I can’t vibecode their HIPAA compliance and various certifications.
Really good analysis, but it misses the most important element: that the incentives of the humans in the loop are not aligned with what's best for the company. The people who make purchasing decisions are all MBAs from top-tier schools; the only reason they pay 100K - 200K for an MBA is to become part of that network. Enterprises are infested with these MBAs. These people buy/sell software (and anything else) from each other. High-dollar contracts mean a bigger title, more compensation, promotions, etc. This is just human nature.
> Jane Street Capital's Yaron Minsky once said that contrary to popular belief hiring for OCaml developers was easier because the signal to noise ratio in the OCaml community is so much better than other, more approachable languages.
I saw a YouTube video years ago that featured Yaron Minsky. He made similar points. In short, some programming languages are like catnip for excellent programmers.
>In short, some programming languages are like catnip for excellent programmers.
I think that misses the point.
Things that are hard have a higher percentage of people who don't need it to be easy.
If you're a "good" programmer you don't need the "community support" (i.e. a bunch of stuff to tell you why you should do things one way or the other in your particular language) so you're free to choose niche languages based on other factors and in turn there will be more good programmers programming in those languages.
You see this in all sorts of subjects not just programming.
> Hence what, for lack of a better name, I'll call the Python paradox: if a company chooses to write its software in a comparatively esoteric language, they'll be able to hire better programmers, because they'll attract only those who cared enough to learn it.
It also helps that Jane Street has like 3k employees, a good chunk of whom never touch code at all, and of those that do, a good chunk who won't be touching OCaml. Hundreds of OCaml programmers though, yes.
That may not scale for larger companies.
Also important to note, they don't require you to know OCaml when you get the job. They will teach you OCaml.
All that said, man it would be cool to work for JS (or anyone really) and write OCaml.
The core insight that enterprises select products on familiarity over anything else, is valuable. I’m going to keep it in mind for future customer engagements.
Yeah, I don't quite get his point here. He seems to be complaining that enterprise companies buy from other enterprise and larger companies instead of from him. It's a tale as old as time.
Enterprises buy from large companies because those large companies come with support teams, liability, and expertise that you don't need to manage internally.
It's rare that I read an article that actively annoys me, but there's something about how this is written that seems a little arrogant.
Understood that this is a pitch for his own platform (which is fair enough), there is a mixture of a few things here which are common tech tropes.
- Enterprise buyers are risk averse and buy the wrong thing
- Language X is better because the people that use it are smarter
- New tech is difficult for established players
Not really a fresh take but at least it's well written.
Imo, there is a real question about the value of better here. Also, the ability and likelihood of the enterprise to actually leverage better.
This dynamic is not new. Unsophisticated enterprise buyers making bad decisions in a bad way. We haven't had an overwhelming market discipline come down though.
Eh, the "enemy" section skips an important bit that was spelled out in the intro by the buyer and wasn't listened to: if the small vendor goes bust, who maintains the system afterwards? If you plan in 10-year cycles, greenfield buys look scary.
That's why VCs look favorably on startups which go through the motions of setting up a partner-led sales channel. An established partner taking on maintenance contracts bridges the lifecycle gap between the two realities.
It's an interesting problem for small businesses that want to sell stuff that will be used and relied on for a very long time.
In a sense, they have to make themselves obsolete. Either by making sure they are a part of a larger network, or by making sure that the org itself can own the product or service.
Easy solution: make your software look like a familiar turd. Make it look like a crappy dBase III application that's been rolled over to a modernized UI.
A strong appetite for familiarity implies a desire for avoiding effort. Effort - thinking, negotiating, planning, testing. Effort is cost.
The author has a new thing which is different - unfamiliar - and ostensibly better. To a customer, when is a claim of better credible, and when is better really better? How does better measure up as a benefit?
The challenge for any product story is to a) illuminate the need - why is the status quo intolerable and b) communicate the benefit tangibly to your audience. That the audience thinks your new thing is worth the effort depends on them understanding the new thing, feeling the need, and feeling good about the effort needed to exploit your thing. You'd like to get to your customer saying "I want that".
I think the specific question for axonlore.com is communicating benefit - how does it impact whatever workflows it serves? The website is a "thing" story, vs a benefit story in my view. I like "enterprise intelligence" as a thing, but it's a tough product. It inevitably implies culture change, and in the decision making space, the key people think they are intelligent enough already -- they want to scale themselves. Someone mentioned "better RAG" - maybe the story is how agents can perform better and more cost effectively. I am not clear that "the market" knows that it needs that yet.
I don't think "familiarity" is the right framing. Application automation, or workflow automation, or whatever the enterprise framing is of agentic solution generation, is to me a question of variance and effort: variance in the quality of a work product, and the net effort to produce it. Variance is the complement to familiar.
- low variance / high effort: demonstrated need for precision and or reliability
- high variance / high effort: when there seems like potential huge upside, or existential risk.
From an IT perspective, enterprise status quo is toward low variance / high effort. The market "want" here now with "agentic" seems to be the benefit of low variance / low effort solutions ... where, in enterprise, getting an adequate solution is no longer gated on negotiating with or relying on IT or dev. Ultimately, I think enterprises want low-variance, low-effort operations -- customers of enterprise customers pay for low variance. I think an agentic-IT solution question will be how confidently one can iterate and converge to that from whatever is delivered in the first pass. What's the ultimate effort of getting something "right enough"?
> an LLM does not care whether your codebase is Java or Clojure. It cares about the token efficiency of the code, the structural regularity of the data, the stability of the language's semantics across releases.
Huh? All current and previous-gen models are most effective when coding in languages with the most training data.
While I agree the newest frontier model may be smart enough to reason at a lower level and be agnostic, its "relatively dumber / less capable" forebears need lots of examples to pattern match from.
> And they put it succinctly: buying from a small innovative company is brave while buying from a big, well recognised name is an insurance policy and the risk-averse buyer must have the insurance.
As the article notes, the alternatives from the large companies suck. So this is like buying fire insurance from a company that promptly sets fire to your house. You are buying the insurance while knowing you will need it because the disaster is already happening.
> Enterprise knowledge has always been as much a human problem as a technology one. Nobody wants to do the structuring work, and every prior architecture demanded that somebody do the structuring work rather than their actual job
This is correct and very agreeable to everyone, but after some waffle they then write this:
> Structure, for the first time, can be produced from content instead of demanded from people
These quotes are very much at odds. Where is this structure and content supposed to come from if you just said that nobody makes it? Nowhere in that waffle is it explained clearly how this is really supposed to work. If you want to sell AI and not just grift, this is the part people are hung up on. Elsewhere in the article are stats on hallucination rates of the bigger offerings, and yet there's nothing to convince anyone this will do better other than a pinky promise.
I think the explanation comes later in the article:
"It is graph-native - not a vector database with graph features bolted on, not a document store with a graph view, but a graph at it's core - because the multi-hop question intelligent systems actually have to answer cannot be answered by cosine similarity over chunked text, no matter how much AI you paste on top."
And
"It has a deterministic harness around its stochastic components. The language model proposes but the scaffolding verifies. Every inference, every tool call, every state change is captured in an immutable ledger as first-class data and this is what makes non-deterministic components safe to deploy where determinism is required."
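That second quote describes a recognizable pattern: a stochastic proposer wrapped by a deterministic verifier, with every step appended to an append-only log. A minimal Python sketch under that reading - all names here are hypothetical, since the article gives no implementation details:

```python
import json

def run_with_harness(propose, verify, task, ledger):
    """The stochastic component proposes; deterministic scaffolding
    verifies; every step is appended to an append-only ledger."""
    proposal = propose(task)           # non-deterministic step (e.g. an LLM call)
    accepted = verify(task, proposal)  # deterministic check
    ledger.append(json.dumps(          # first-class, append-only record
        {"task": task, "proposal": proposal, "accepted": accepted}))
    return proposal if accepted else None

# Toy run: the "model" proposes a sum, the verifier recomputes it.
ledger = []
answer = run_with_harness(
    propose=lambda t: sum(t),         # stands in for the language model
    verify=lambda t, p: p == sum(t),  # deterministic re-computation
    task=[1, 2, 3],
    ledger=ledger,
)
print(answer, len(ledger))  # -> 6 1
```

Whether the actual product works this way isn't verifiable from the article; the sketch only unpacks what "the language model proposes but the scaffolding verifies" plausibly means.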
> The category error under all of this is the assumption that you can take a document library or a wiki [...] and make it intelligent by attaching a language model to it. But you cannot.
Imagine a model with a reliable 100M context window. Then all of a sudden you can.
> The information the intelligent answer needs was never in the wiki in the first place.
This whole article smells a bit of someone being salty they couldn't sell their software.
Having worked in corporate with vaguely software-buying related stuff, I am confused at why so many small companies think an enterprise would be excited to go with them.
Even if I love your product, how do I pitch to the powers that be that we replace something we are already paying for with this new thing? The company might make billions but I've always had to fight for my budgets.
And tell me again why we should bet our core operations on a two man outfit with six months runway? What happens when you pivot? What happens when our competitor acquires you? What happens when you go on a transatlantic flight and a key expires?
Selling to enterprise early on is a poisoned chalice as well. They have much larger teams, so you'll be dealing with a horde of product owners, compliance specialists, data privacy experts, who might never touch your product but come with excel sheets with 300 rows of gnarly questions. Not to mention just getting the bills paid can be a huge fight.
It will drag you into their orbit, especially if 80% of your revenue is from a single customer. Soon your other customers will start going to someone who actually have time to care about them. And by then there's been a political shift in-house and the new VP of X gets a quote for an outsourcing bundle from his squash buddy at one of the big system integrators. Your line item gets bundled into this to motivate the cost even though it's not even relevant. And that the end of your company.
If you do want to sell, treat the enterprise like an ecosystem of SMEs, find a department or team who are more innovative and sell to them behind the backs of enterprise IT. Once you've entrenched yourself and the users love you, then you can expand to other teams and eventually enterprise IT will be forced to negotiate with you for a license and do the compliance dance. But even so this will take years of effort and luck.
> If you do want to sell, treat the enterprise like an ecosystem of SMEs, find a department or team who are more innovative and sell to them behind the backs of enterprise IT. Once you've entrenched yourself and the users love you, then you can expand to other teams and eventually enterprise IT will be forced to negotiate with you for a license and do the compliance dance. But even so this will take years of effort and luck.
This is the way.
There are back-doors as well. If you can get your software on a pre-approved vendor list in a big consultancy you can by-pass a lot of the song and dance with IT. Companies like Xerox have lists like this. They sign long-term contracts with enterprise customers whose business units can use their part of the budget to get any of the software on the list.
All you have to do from there is market to the right people running those business units.
Selling through the normal IT channels is much harder. It can take 6-9 months of back and forth and you'll still likely get denied more often than not. Enterprises would rather contract with a vendor like SAP, Xerox, Microsoft, etc which is all integrated with their systems already and has the advantage of the Lindy effect in place.
I agree, the risk at the CTO/CIO level is it's four years later, the startup went under, and you have this software integrated into your environment. If you're lucky, someone else will have bought it. They'll on-ramp you to their stack. But then you run the risk of their seeing you as trapped. It's not about how much money you want to pay for the product. It's about extortion.
Or, if you're less lucky, you'll be left with software you can't maintain. Even if there's a contract clause that says you get all the yummy, yummy source code, you may not be able to open source it because you don't own the copyright to some or all of the code. You just have the source code. Good luck with that.
No one gets fired for buying IBM because you know (or at least we once knew) IBM would definitely be around for years to come to support the product. Is it expensive? Yes. Have I found a lot of enterprise products miserable to use? Yes. Does everything have the stink of "well, we made it work well enough not to get fired"? Yes. But you won't be getting extorted Broadcom style, or sitting around with 5,000,000 lines of AI-generated source that has all sorts of hacks and workarounds for the four other companies to whom the startup sold their software.
"We use companies big enough to buy us, or small enough we can buy" was a rule of thumb at one enterprise I dealt with.
the other side of this is instructive too. we've sold into mid-market accounts and the decision isn't usually 'is this better' but 'what happens to me if this breaks'. the incumbent's main feature isn't functionality, it's someone else's neck on the line if it goes wrong. the winning move for a small SaaS is afaik to get a champion inside who's willing to own that risk personally, and make sure they look very good when it works.
> so you'll be dealing with a horde of product owners, compliance specialists, data privacy experts, who might never touch your product but come with excel sheets with 300 rows of gnarly questions
There is nothing like being on a call when the product isn't working right and the customer has 28 people from their side and only 2 of them know anything about the subject, but 26 of them have very strong conflicting opinions.
> There is nothing like being on a call when the product isn't working right and the customer has 28 people from their side and only 2 of them know anything about the subject, but 26 of them have very strong conflicting opinions.
This is no problem per se if you, for example, don't score too high on the Agreeableness dimension in the five-factor model. [1] :-)
The problem rather is that the people who by their personality traits can tolerate such situations quite well are very often not the kind of people that customers want as support contact persons and vice versa.
[1] https://en.wikipedia.org/wiki/Big_Five_personality_traits
Heh, I'm agreeable when (at least I) think you are doing the right thing, and get short pretty quick when I feel like you're wasting my time.
I've learned a number of strategies over time dealing with crap like the above. Typically it's getting a manager/customer service on the call with the large group and taking the people I want on the call off to a separate call where things actually happen.
Most of the time we'll have things fixed, or at least a plan for a solution done before the big group has got past anything at all.
And if you really must target enterprise customers, then it might be better for an SMB to pitch Design, Build, and Operate consulting engagements rather than traditional finished software products or services.
Or perhaps even partner with a larger consultancy who could be relied upon for the "operate" phase, leaving you to concentrate on the (generally more interesting) design & build parts.
The moral of this story is: it is human nature that when we have something, we do not want to lose it. This is an entirely different mindset from the one we operate in when we do not yet have something. It explains why the wealthy are so toxic. Their only goal in life is not to lose what they have.
I worked at a well respected technical company and was given the task of evaluating a small company that we could acquire. I looked at the technology -something anyone could put together in a day. I looked at the business model. It was that you get free storage if you get a friend to sign up for free storage!!
I told the company that it had no technology and a business model that made no sense. They bought the company. Why? Because the target company told them that other companies were interested - and they were.
They did not want to miss the boat and lose what they had. Nothing came from this acquired company. Meanwhile the fundamental technology was disrupted by something new and the company fell apart. End of story. This is common.
So AI? This is about not missing the boat. Someplace, somewhere there is value in AI, but for now, if you have missed the boat you are probably better off. So no, this is not (as the current top comment says) about "they couldn't sell their software". This is about a very real reason why companies try to not miss the boat rather than innovate.
[ASIDE] And I cannot help but laugh at the Clojure reference with the statement "two things are simple if they are not intertwined". I have always been interested in Clojure, but I never go there because it is not "simple". It is intertwined with Java, which I know all too well and do not love. Java was the language of choice at this same company and I wasted too many months of my life bowing before that cumbersome language.
Commenting on the aside: that was my first reaction as well (years ago). But really you can treat it mostly as having a mature runtime and freebies and get a lot out of the language. Many who use and like Clojure, don’t necessarily like Java the language, or have similar reservations like you.
"When the software is being written by agents as much as by humans, the familiar-language argument is the weakest it has ever been - an LLM does not care whether your codebase is Java or Clojure. It cares about the token efficiency of the code, the structural regularity of the data, the stability of the language's semantics across releases."
Isn't familiarity with the language even more of a factor with an LLM? The language they do best with is the one with the largest corpus in the training set.
And they're very sensitive to new releases, often making it difficult to work with after a major release of a framework for example. Tripping up on minor stuff like new functions, changes in signatures etc.
A stable mature framework then is the best case scenario. New frameworks or rapidly changing frameworks will be difficult, wasting lots of tokens on discovery and corrections.
Familiarity matters to some degree. But there are diminishing returns I think.
Stability, consistency and simplicity are much more important than this notion of familiarity (there's lots of code to train on) as long as the corpus is sufficiently large. Another important one is how clear and accessible libraries, especially standard libraries, are.
Take Zig for example. Very explicit and clear language, easy access to the std lib. For a young language it is consistent in its style. An agent can write reasonable Zig code and debug issues from tests. However, it is still unstable and APIs change, so LLMs get regularly confused.
Languages and ecosystems that are more mature and take stability very seriously, like Go or Clojure, don't have the problem of "LLM hallucinates APIs" nearly as much.
The thing with Clojure is also that it's a very expressive and very dynamic language. You can hook up an agent into the REPL and it can very quickly validate or explore things. With most other languages it needs to change a file (which are multiple, more complex operations), then write an explicit test, then run that test to get the same result as "defn this function and run some invocations".
> Languages and ecosystems that are more mature and take stability very seriously, like Go or Clojure, don't have the problem of "LLM hallucinates APIs" nearly as much.
Counterexample: the Wolfram programming language (by many people rather known from the Mathematica computer algebra system).
It is incredibly mature and takes stability very seriously, but in my experience LLMs tend to hallucinate a lot when you ask them to write Wolfram or Mathematica code.
I see the reason in two points:
1. There exists less Wolfram/Mathematica code online than for many other popular programming languages.
2. Code in Wolfram is often very concise; it is thus less forgiving of "somewhat correct" code (which is in my opinion mostly a good thing), so LLMs often struggle writing Wolfram/Mathematica code.
Yes, I'd agree that from the model's perspective one cohesive, well-established language would be more reliable. The nightmare scenario is an enterprise suite with a hodgepodge of every language known to man all mangled together, because the frontier model at the time decided Haskell would be the most efficient when compiled to WebAssembly, and some poor intern has to fix a bug that should cost 100x less to patch by hand than by rerunning the LLM.
> The language they do best with is the one with the largest corpus in the training set.
Up to a point, I guess? There must be a point of diminishing returns based on the expressiveness of the language
I mean, a language that has 8 different ways to declare + initialise composite variables needs to have a much larger training corpus than a language that has only 2 or 3 different ways.
The more expressive a language, the more different suitable patterns would be required, which results in a larger corpus being needed.
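To make the corpus-size point concrete, here is a hypothetical sketch in Python (the comment names no particular language): several distinct surface forms that all build the same composite value, each one a separate pattern a model must have seen often enough in training.

```python
# Hypothetical illustration: distinct surface forms for the "same" dict.
# Every extra form is another pattern the training corpus must cover.
pairs = [("x", 1), ("y", 2)]

a = {"x": 1, "y": 2}              # literal
b = dict(x=1, y=2)                # keyword constructor
c = dict(pairs)                   # constructor from key/value pairs
d = {k: v for k, v in pairs}      # dict comprehension
e = dict.fromkeys(["x", "y"])     # fromkeys (all values None) ...
e.update(pairs)                   # ... then filled in by update

# All five spellings produce the identical value.
assert a == b == c == d == e
```

A more expressive language multiplies these forms, so the corpus has to keep up.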
I spent about two hours last night trying to get a consistent and accurate answer out of Claude regarding a set of graphics APIs. I then went the old fashioned way to find most of the articles outside of a couple of sources were also incorrect API slop. I can't override methods that don't exist and never have existed in an API, but that's what the clankers have latched on to.
Just before that, at work, I found a bug in an AI driven refactor of code. For some reason, both the original refactor and the ai driven autocomplete kept trying to send the wrong parameters to a function. It was determined to get it wrong, even after I manually fixed it. [Edit - I should also mention the AI driven code review agent tried to do the same thing. The clankers are consistent.]
This is why familiar language matters. Because at some point, you'll have bugs that the AI can't fix. And by the way, I use LLM tools at work and have a set of skills that improve my productivity, if not my QoL. But I still need to be able to dive into the language, the build tools, and fix things.
> The language they do best with is the one with the largest corpus in the training set.
Not the case, strangely. They do best with Elixir. https://arxiv.org/pdf/2508.09101
SRE here. Blog author seems to not understand the business side of the house which is concerning.
Companies pick Java or .NET because hiring developers is easy, which the business side loves, and a lot of business development work is not rocket science. It's taking business logic and implementing it in code.
I recommend this blog article to understand the logic behind Java but it applies to other technologies in question. https://gist.githubusercontent.com/terryjbates/3fcab7b07a0c5...
That link displays as raw content, maybe [1] is kinder, and is rooted on the original author's blog.
[1] https://sasamat.xen.prgmr.com/michaelochurch/wp/?p=881
I didn't want to link off to random site.
> Companies pick Java or .Net because hiring developers is easy, which business side loves
Instead of giving a counter-argument, I'll link to a parallel discussion thread concerning "hiring developers for programming language X is easy": https://news.ycombinator.com/item?id=47888298
> a lot of business development work is not rocket science. It's taking business logic and implementing in code.
In my experience (and I claim that I sit rather close to the source), it is rather that developers who implement business logic are typically actively held back or prevented from inventing smart solutions for the problems that the company has - even though these would (very often) be very helpful for the company.
In the area of implementing business logic, the tall poppy syndrome [1] is thus very prevalent: you are strongly hinted not to think of innovative solutions, but to be a good worker bee. This is why, in my opinion, implementation of line-of-business applications is frowned upon by many good programmers - and not because the questions you are involved with are "boring" (they are not!).
[1] https://en.wikipedia.org/wiki/Tall_poppy_syndrome
Jane Street is always a bad example since they are working on niche problems that few people experience.
Sure, Tall Poppy happens because A) it's human nature and B) companies don't want unusual poppy sizes; they want them all the same size so that when it's time to harvest some of them, they can just quickly cut.
> Jane Street is always a bad example since they are working on niche problems that few people experience.
Surprisingly (?), in my experience in a lot of industries people (or more specifically: programmers who develop internal software for this industry) work on problems that are incredibly niche outside this industry, and thus incredibly few people ever experience.
For anyone looking for the original - https://web.archive.org/web/20120504065429/http://michaeloch...
(it was easier for me to use in reader mode because it didn't obliterate spacing between words)
This article seems to fundamentally misunderstand what 'enterprise IT' is all about (enterprise IT being different from IT for a tech-native).
IT is a highly dynamic system, and enterprises optimize for a minimal set of capabilities at the maximum level of abstraction under high levels of uncertainty and different inherited states.
This results in decisions that may not appear technically optimal but which are still an optimal outcome under the extreme uncertainty that an 'enterprise' operates in vis a vis technology paradigms.
Add to this that there is no one technology operating model. Everyone has a different starting point and different inherited technical debt. They are optimizing from their own starting point, not a clean slate.
This is what people don't get about what Microsoft actually does - it abstracts both at the technical level and the operational (contracting) level. This is valuable for an organization whose core competency is not technology, even if it does not lead to the most optimal outcomes from a pure technology perspective.
Yeah, "Nobody ever got fired for purchasing IBM"... a story as old as time itself.
But that is the "fear" side of the enterprise sales equation... The "greed" side of it is for the buyer to make the long / short hedge.
The exec who gets the value of the working product can potentially come out shining while their peers are furiously backpedalling next year. This consummate exec does it by name-associating with their "main bet", which is optically great for the immediate term but totally out of their control (because the big-corp vendor will drag its feet, like every SAP integration failure they've seen), while getting a sense of agency by running an off-books skunkworks project that actually works and saves the day.
A fine needle to thread for the upstart, but better than standing outside the game.
"Nobody gets fired for buying IBM"
This is still true today. Gartner makes a living out of it. Always prefer buying the "familiar" product rather than being successful with the right solution.
Fortunately, history shows that those who do their math right actually end up being extremely successful: Google using Linux on commodity hardware for their DB servers, AWS developing their own network equipment and protocols, etc. It takes guts, but when it works it leaves the competition years behind.
Well, in Quebec, the driver's insurance agency (SAAQ) decided to go with SAP and the major bosses were fired.
The cost of the migration was supposed to be $500 million and is now estimated at $1.1 billion.
But, they weren't fired because of SAP, they were fired because they lied to the government about the true cost.
> The wiki is not the thing you add AI to. The wiki is the thing AI replaces.
HN discussions seem to miss this. What LLMs are, before you use them for agentic anything, is a lossy compression of a large text corpus.
The original wikis have to survive so you can have access to the non-lossy version, though.
> “…the buyer bought what was familiar to them, not what was right.”
This friction, and the line dividing solutions from consulting, gave me an idea: they're describing conditions where the LLM revolution might track with the desktop revolution. Companies, groups within companies, and small businesses will DIY it and say good enough.
Except not really when big enterprise needs another party to hold blame and prove compliance to regulations and standards to auditors and customers.
When you hire a big company like Microsoft to handle some enterprise function of your business, you have someone who is already certified in whatever regulatory thing you need, and you have someone big enough to sue if they mess up.
I can vibecode Google Drive in a weekend but I can’t vibecode their HIPAA compliance and various certifications.
Really good analysis, but it misses the most important element: the incentives of the humans in the loop are not aligned with what's best for the company. The people who make purchasing decisions are all MBAs from top-tier schools; the only reason they pay $100K-$200K for an MBA is to become part of that network. Enterprises are infested with these MBAs. These people buy/sell software (and anything else) from each other. High-dollar contracts mean a bigger title, more compensation, promotions, etc. This is just human nature.
It makes me think about this HN comment: https://news.ycombinator.com/item?id=11933250
I saw a YouTube video years ago that featured Yaron Minsky. He made similar points. In short, some programming languages are like catnip for excellent programmers.
>In short, some programming languages are like catnip for excellent programmers.
I think that misses the point.
Things that are hard have a higher percentage of people who don't need it to be easy.
If you're a "good" programmer you don't need the "community support" (i.e. a bunch of stuff to tell you why you should do things one way or the other in your particular language) so you're free to choose niche languages based on other factors and in turn there will be more good programmers programming in those languages.
You see this in all sorts of subjects not just programming.
PG wrote about this back in 2004: https://www.paulgraham.com/pypar.html
> Hence what, for lack of a better name, I'll call the Python paradox: if a company chooses to write its software in a comparatively esoteric language, they'll be able to hire better programmers, because they'll attract only those who cared enough to learn it.
It also helps that Jane Street has like 3k employees, a good chunk of whom never touch code at all, and of those that do, a good chunk who won't be touching OCaml. Hundreds of OCaml programmers though, yes.
That may not scale for larger companies.
Also important to note, they don't require you to know OCaml when you get the job. They will teach you OCaml.
All that said, man it would be cool to work for JS (or anyone really) and write OCaml.
The core insight, that enterprises select products on familiarity over anything else, is valuable. I'm going to keep it in mind for future customer engagements.
For context, this is the author's website: https://axonlore.com/
So while it's fair to say enterprise buyers buy safety, if he's referring to his own product I would offer the following.
He's in the AI tool space, i.e. a better RAG. So you're selling to AI developers, and developers nearly always go open source first.
If they can't find an open source solution or if they don't even look, they prefer to build it themselves.
For this kind of product most enterprise buyers won't understand its benefits, you have to get the developers interested first.
And finally, in this market, you are 1 prompt away from someone cloning your whole business and calling it openaxon or something like that.
It's a tough time to be a software startup.
> The category has never once, in sixty years, produced a product that reliably made good
In the same article the author was mentioning a few expert systems from the past that were quite obviously successful.
> on the promise printed on its marketing
Ah, _that_ promise. That promise is never fulfilled anywhere, nor is it expected to be.
Yeah, I don't quite get his point here. He seems to be complaining that enterprise companies buy from other enterprises and larger companies instead of him. It's a tale as old as time.
Enterprises buy from large companies because those large companies come with support teams, liability, and expertise that you don't need to manage internally.
It's rare that I read an article that actively annoys me, but there's something about how this is written that seems a little arrogant.
> seems a little arrogant
A little. But it's a nice article nevertheless.
That's just human nature, to prefer what's familiar.
The insight here is that this also still applies to huge enterprise contracts where supposedly more rational decision making should apply.
Not just enterprise, any human organisation.
Also sunk costs “should in theory” never be considered but I’ve only ever seen sunk costs considered.
Understood that this is a pitch for his own platform (which is fair enough), there is a mixture of a few things here which are common tech tropes.
- Enterprise buyers are risk averse and buy the wrong thing
- Language X is better because the people that use it are smarter
- New tech is difficult for established players
Not really a fresh take but at least it's well written.
Imo, there is a real question about the value of better here. Also, the ability and likelihood of the enterprise to actually leverage better.
This dynamic is not new. Unsophisticated enterprise buyers making bad decisions in a bad way. We haven't had an overwhelming market discipline come down though.
Do these enterprises actually need "good?"
Eh, the "enemy" section skips an important bit that was spelled out by the buyer in the intro and wasn't listened to: if the small vendor goes bust, who maintains the system afterwards? If you plan in 10-year cycles, greenfield buys look scary.
That's why VCs look favorably on startups which go through the motions of setting up a partner-led sales channel. An established partner taking on maintenance contracts bridges the lifecycle disconnect between the two realities.
But no, corporate is bad, I guess.
It's an interesting problem for small businesses that want to sell stuff that will be used and relied on for a very long time.
In a sense, they have to make themselves obsolete. Either by making sure they are a part of a larger network, or by making sure that the org itself can own the product or service.
This is just an ad
Easy solution: make your software look like a familiar turd. Make it look like a crappy dBase III application that's been rolled over to a modernized UI.
> your system is not an intelligence tool, it is a compression primitive with a chat interface on top
One should not underestimate a "compression primitive with a chat interface". For certain tasks it is a superpower.
A strong appetite for familiarity implies a desire for avoiding effort. Effort - thinking, negotiating, planning, testing. Effort is cost.
The author has a new thing which is different - unfamiliar - and ostensibly better. To a customer, when is a claim of "better" credible, and when is better really better? How does better measure up as a benefit?
The challenge for any product story is to a) illuminate the need - why is the status quo intolerable and b) communicate the benefit tangibly to your audience. That the audience thinks your new thing is worth the effort depends on them understanding the new thing, feeling the need, and feeling good about the effort needed to exploit your thing. You'd like to get to your customer saying "I want that".
I think the specific question for axonlore.com is communicating benefit - how does it impact whatever workflows it serves? The website is a "thing" story, vs a benefit story in my view. I like "enterprise intelligence" as a thing, but it's a tough product. It inevitably implies culture change, and in the decision making space, the key people think they are intelligent enough already -- they want to scale themselves. Someone mentioned "better RAG" - maybe the story is how agents can perform better and more cost effectively. I am not clear that "the market" knows that it needs that yet.
I don't think "familiarity" is the right framing. Application automation, or workflow automation, or whatever the enterprise framing is of agentic solution generation, is to me a question of variance and effort: variance in the quality of a work product and the net effort to produce it. Variance is the complement of familiarity.
- high variance / low effort: prototypes
- low variance / low effort: automating anything repetitive and complicated
- low variance / high effort: demonstrated need for precision and or reliability
- high variance / high effort: when there seems like potential huge upside, or existential risk.
From an IT perspective, the enterprise status quo is low variance/high effort. The market "want" here now with "agentic" seems to be the benefit of low variance/low effort solutions ... where, in enterprise, getting an adequate solution is no longer gated on negotiating with or relying on IT or dev. Ultimately, I think enterprises want low variance, low effort operations -- customers of enterprise customers pay for low variance. I think an Agentic-IT solution question will be how confidently one can iterate and converge to that from whatever is delivered in the first pass. What's the ultimate effort of getting something "right enough"?
> an LLM does not care whether your codebase is Java or Clojure. It cares about the token efficiency of the code, the structural regularity of the data, the stability of the language's semantics across releases.
Huh? All current and previous-gen models are most effective when coding in languages with the most training data.
While I agree the newest frontier model may be smart enough to reason at a lower level and be agnostic, its "relatively dumber / less capable" forebears need lots of examples to pattern match from.
Familiarity once again!
> And they put it succinctly: buying from a small innovative company is brave while buying from a big, well recognised name is an insurance policy and the risk-averse buyer must have the insurance.
As the article notes, the alternatives from the large companies suck. So this is like buying fire insurance from a company that promptly sets fire to your house. You are buying the insurance while knowing you will need it because the disaster is already happening.
> Enterprise knowledge has always been as much a human problem as a technology one. Nobody wants to do the structuring work, and every prior architecture demanded that somebody do the structuring work rather than their actual job
This is correct and very agreeable to everyone, but then after some waffle they then write this:
> Structure, for the first time, can be produced from content instead of demanded from people
These quotes are very much at odds. Where is this structure and content supposed to come from if you just said that nobody makes it? Nowhere in that waffle is it explained clearly how this is really supposed to work. If you want to sell AI and not just grift, this is the part people are hung up on. Elsewhere in the article are stats on hallucination rates of the bigger offerings, and yet there's nothing to convince anyone this will do better other than a pinky promise.
I think the explanation comes later in the article:
"It is graph-native - not a vector database with graph features bolted on, not a document store with a graph view, but a graph at it's core - because the multi-hop question intelligent systems actually have to answer cannot be answered by cosine similarity over chunked text, no matter how much AI you paste on top."
And
"It has a deterministic harness around its stochastic components. The language model proposes but the scaffolding verifies. Every inference, every tool call, every state change is captured in an immutable ledger as first-class data and this is what makes non-deterministic components safe to deploy where determinism is required."
> The category error under all of this is the assumption that you can take a document library or a wiki [...] and make it intelligent by attaching a language model to it. But you cannot.
Imagine a model with a reliable 100M context window. Then all of a sudden you can.
> The information the intelligent answer needs was never in the wiki in the first place.
Oh well.