Lynx Roundup, December 4th 2020

Awesome Lisp Languages! Visualizing Neural Networks with the Grand Tour! Common machine learning programming errors in Python!

Matthew Alhonte
A list of Lisp-flavored programming languages (dundalek/awesome-lisp-languages on GitHub).
To Make the Perfect Mirror, Physicists Confront the Mystery of Glass
Sometimes a mirror that reflects 99.9999% of light isn’t good enough.
Visualizing Neural Networks with the Grand Tour
By focusing on linear dimensionality reduction, we show how to visualize many dynamic phenomena in neural networks.
A visual debugger for Jupyter
Most of the progress made in software projects comes from incrementalism. The ability to quickly see the outcome of an execution and iterate has been one of the main reasons for the success of…
Common Machine Learning Programming Errors in Python
In this post I will go over some of the most common errors I come across in python during the model building and development process. For demonstration purposes we will use height/weight data which…

Andreessen-Horowitz has always been the most levelheaded of the major current-year VC firms. While other firms were levering up on “cleantech” and nonsensical biotech startups that violate physical law, they quietly continued to invest in sane companies (along with hot garbage bugman products like Soylent). I assume they actually listen to people on the front lines rather than to what their VC pals are telling them. Maybe they’re just smarter than everyone else; they’re definitely more independent-minded. Their recent review of how “AI” investments differ from software company investments is absolutely brutal. I’m pretty sure most people didn’t get the point, so I’ll quote it, emphasizing the important bits.

They use all the buzzwords (my personal bête noire: the term “AI” when they mean “machine learning”), but they’ve finally publicly noticed certain things which are abundantly obvious to anyone who works in the field. For example, gross margins are low for deep learning startups that use “cloud” compute. Mostly because they use cloud compute.


Gross Margins, Part 1: Cloud infrastructure is a substantial – and sometimes hidden – cost for AI companies 🏭

In the old days of on-premise software, delivering a product meant stamping out and shipping physical media – the cost of running the software, whether on servers or desktops, was borne by the buyer. Today, with the dominance of SaaS, that cost has been pushed back to the vendor. Most software companies pay big AWS or Azure bills every month – the more demanding the software, the higher the bill.

AI, it turns out, is pretty demanding:

  • Training a single AI model can cost hundreds of thousands of dollars (or more) in compute resources. While it’s tempting to treat this as a one-time cost, retraining is increasingly recognized as an ongoing cost, since the data that feeds AI models tends to change over time (a phenomenon known as “data drift”).
  • Model inference (the process of generating predictions in production) is also more computationally complex than operating traditional software. Executing a long series of matrix multiplications just requires more math than, for example, reading from a database.
  • AI applications are more likely than traditional software to operate on rich media like images, audio, or video. These types of data consume higher than usual storage resources, are expensive to process, and often suffer from region of interest issues – an application may need to process a large file to find a small, relevant snippet.
  • We’ve had AI companies tell us that cloud operations can be more complex and costly than traditional approaches, particularly because there aren’t good tools to scale AI models globally. As a result, some AI companies have to routinely transfer trained models across cloud regions – racking up big ingress and egress costs – to improve reliability, latency, and compliance.

Taken together, these forces contribute to the 25% or more of revenue that AI companies often spend on cloud resources. In extreme cases, startups tackling particularly complex tasks have actually found manual data processing cheaper than executing a trained model.
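To put their point about matrix multiplications in perspective, here’s a back-of-envelope FLOP count; the layer sizes below are illustrative assumptions, not measurements from any real model:

```python
# Rough cost of one dense-layer forward pass: y = W @ x.
# A matrix-vector product with an (n_out, n_in) weight matrix takes
# about 2 * n_out * n_in floating point operations (multiply + add).

def dense_layer_flops(n_in: int, n_out: int) -> int:
    return 2 * n_in * n_out

# Hypothetical mid-sized network: 50 dense layers of 4096 x 4096.
flops_per_inference = 50 * dense_layer_flops(4096, 4096)
print(f"~{flops_per_inference:.2e} FLOPs per prediction")  # ~1.68e9

# Compare: an indexed database point-read involves essentially no
# arithmetic at all, just a few microseconds of lookup work.
```

Billions of floating point operations per prediction versus effectively zero for a database read: that’s the margin problem in one number.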

This is true of pretty much all machine learning with heavy compute and data requirements. The pricing structure of “cloud” bullshit is designed to extract maximum blood from people with heavy data or compute needs; cloud companies would much rather sell time on one piece of hardware to 5 or 10 customers. If you’re lucky enough to have a startup that runs on a few million rows of data and a GBM or Random Forest, this probably doesn’t apply to you, but precious few startups are so lucky. Those who use the latest DL woo, and the huge data sets it requires, will have huge compute bills unless they buy their own hardware. For reasons that make no sense to me, most of them don’t buy hardware.
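Here’s the rent-versus-buy arithmetic in sketch form. All prices are round-number assumptions for illustration, not quotes from any actual provider:

```python
# Back-of-envelope: renting a cloud GPU vs. buying a comparable card.
# Every figure here is an illustrative assumption.

cloud_rate_per_hour = 3.00   # assumed on-demand price for one GPU ($/hr)
hardware_cost = 10_000       # assumed purchase price of a comparable GPU ($)
utilization = 0.5            # fraction of the day the GPU is actually busy

hours_to_break_even = hardware_cost / cloud_rate_per_hour
days_to_break_even = hours_to_break_even / (24 * utilization)
print(f"Break-even after ~{hours_to_break_even:,.0f} GPU-hours "
      f"(~{days_to_break_even:.0f} days at {utilization:.0%} utilization)")
# -> ~3,333 GPU-hours, roughly 278 days: under a year for a busy training
#    box, ignoring power and hosting costs on both sides.
```

Under assumptions anywhere near these, a box that trains models around the clock pays for itself well within its useful life.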

In many problem domains, exponentially more processing and data are needed to get incrementally more accuracy. This means – as we’ve noted before – that model complexity is growing at an incredible rate, and it’s unlikely processors will be able to keep up. Moore’s Law is not enough. (For example, the compute resources required to train state-of-the-art AI models has grown over 300,000x since 2012, while the transistor count of NVIDIA GPUs has grown only ~4x!) Distributed computing is a compelling solution to this problem, but it primarily addresses speed – not cost.

Beyond what they’re saying about the size of deep learning models (doubtless true for interesting new results), admitting that the computational power of GPU chips hasn’t exactly been growing apace is something you rarely hear, though more often lately. Everyone assumes Moore’s law will save us. NVIDIA does have obvious performance improvements it could still make, but at this scale the only way to run significantly bigger models is to line up more GPUs. Doing that in a “cloud” you rent from a profit-making company is financial suicide.
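It’s worth doing the arithmetic on the numbers they quote. Taking the 300,000x and ~4x figures at face value, and assuming a roughly six-year window (2012 to 2018 is the span usually cited for the 300,000x number):

```python
import math

# Quoted figures: ~300,000x growth in training compute since 2012,
# vs. ~4x growth in NVIDIA GPU transistor count over a similar window.
years = 6
compute_doubling = years * 12 / math.log2(300_000)   # months per doubling
transistor_doubling = years * 12 / math.log2(4)      # months per doubling

print(f"Training compute doubles every ~{compute_doubling:.1f} months")    # ~4.0
print(f"Transistor count doubles every ~{transistor_doubling:.0f} months") # ~36
```

A three-to-four-month doubling time against a three-year one: even a healthy Moore’s law (~24-month doublings) wouldn’t come close, and Moore’s law isn’t healthy.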


Gross Margins, Part 2: Many AI applications rely on “humans in the loop” to function at a high level of accuracy 👷

Human-in-the-loop systems take two forms, both of which contribute to lower gross margins for many AI startups.

First: training most of today’s state-of-the-art AI models involves the manual cleaning and labeling of large datasets. This process is laborious, expensive, and among the biggest barriers to more widespread adoption of AI. Plus, as we discussed above, training doesn’t end once a model is deployed. To maintain accuracy, new training data needs to be continually captured, labeled, and fed back into the system. Although techniques like drift detection and active learning can reduce the burden, anecdotal data shows that many companies spend up to 10-15% of revenue on this process – usually not counting core engineering resources – and suggests ongoing development work exceeds typical bug fixes and feature additions.

Second: for many tasks, especially those requiring greater cognitive reasoning, humans are often plugged into AI systems in real time. Social media companies, for example, employ thousands of human reviewers to augment AI-based moderation systems. Many autonomous vehicle systems include remote human operators, and most AI-based medical devices interface with physicians as joint decision makers. More and more startups are adopting this approach as the capabilities of modern AI systems are becoming better understood. A number of AI companies that planned to sell pure software products are increasingly bringing a services capability in-house and booking the associated costs.

Everyone in the business knows about this. If you’re working with interesting models, even assuming the presence of infinite accurately labeled training data, the “human in the loop” problem never completely goes away. A machine learning model is generally “man amplified.” If you need someone (or, more likely, several someones) making a half million bucks a year to keep your neural net producing reasonable results, you might reconsider your choices. If the thing makes human-level decisions a few hundred times a year, it might be easier and cheaper for humans to make those decisions manually, using a better user interface. Better user interfaces are sorely underappreciated; have a look at LabVIEW, Delphi, or Palantir’s offerings for examples of highly productive ones.
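As for the “drift detection” technique the quote mentions: detecting drift is the cheap part; the relabeling and retraining it triggers is where the 10-15% of revenue goes. Here’s a minimal sketch of per-feature drift detection using a two-sample Kolmogorov-Smirnov test (the feature values and threshold are made up for illustration):

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(train_col: np.ndarray, live_col: np.ndarray,
                    alpha: float = 0.01) -> bool:
    """Flag drift when a two-sample KS test rejects the hypothesis that
    training-time and production data share the same distribution."""
    _statistic, p_value = ks_2samp(train_col, live_col)
    return p_value < alpha

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time feature
live = rng.normal(loc=0.3, scale=1.0, size=5_000)   # production has shifted
print(feature_drifted(train, live))  # True: time to budget for relabeling
```

The test itself costs nothing; the humans it summons do not.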


Since the range of possible input values is so large, each new customer deployment is likely to generate data that has never been seen before. Even customers that appear similar – two auto manufacturers doing defect detection, for example – may require substantially different training data, due to something as simple as the placement of video cameras on their assembly lines.


Software which solves a business problem generally scales to new customers: you do some database back-end grunt work, plug it in, and you’re done. Sometimes you have to adjust your processes to fit the accepted uses of the software, or spend absurd amounts of labor adjusting the software to fit your business processes. SAP is notorious for this, to the point of causing bankruptcy in otherwise healthy companies; such cycles are hugely time- and labor-consuming, so obviously they must be worth it at least some of the time. But most people haven’t figured out that ML-oriented processes almost never scale the way a simpler application would. You will be confronted with the same problem as with SAP: a ton of work done up front, all of it custom. I’ll go out on a limb and assert that the up-front data pipelining, and the organizational changes that allow for it, are probably more valuable than the actual machine learning piece.


In the AI world, technical differentiation is harder to achieve. New model architectures are being developed mostly in open, academic settings. Reference implementations (pre-trained models) are available from open-source libraries, and model parameters can be optimized automatically. Data is the core of an AI system, but it’s often owned by customers, in the public domain, or over time becomes a commodity.

That’s right; that’s why a lone wolf like me, or a small team, can do as good a job or better than some firm with 100x the head count and $100m in VC backing. I know what the strengths and weaknesses of the latest woo are. Worse than that: I know that, from a business perspective, something dumb like Naive Bayes or a linear model might solve the customer’s problem just as well as the latest gigawatt neural net atrocity. The VC-backed startup might be betting on its “special tool” as moaty IP, but a few percent difference on a ROC curve won’t matter if the data is hand-wavey and not really labeled properly, which describes most data you’ll encounter in the wild. ML is undeniably useful, but it is extremely rare that a startup has “special sauce” that works 10x or 100x better than something you could fork from a git repo. People won’t pay a premium over in-house ad hoc data science unless it delivers truly game-changing results. The technology could impress the shit out of everyone else, but if it’s only getting 5% better MAPE (or whatever), it’s irrelevant. A lot of “AI” doesn’t really work better than a histogram from a “group by” query. Throwing complexity at it won’t make it better: sometimes there’s no data in your data.
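To make the baseline point concrete, here’s a sketch of the comparison I mean: a plain linear model against a heavier ensemble on data with sloppy labels. The synthetic dataset and the 15% label-noise rate are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic binary classification task; flip 15% of training labels
# to mimic the hand-wavey, mislabeled data you meet in the wild.
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)
y_tr = np.where(rng.random(len(y_tr)) < 0.15, 1 - y_tr, y_tr)

for model in (LogisticRegression(max_iter=1000),
              GradientBoostingClassifier(random_state=0)):
    probs = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print(f"{type(model).__name__}: AUC = {roc_auc_score(y_te, probs):.3f}")
# With labels this noisy, the gap is typically a couple points of AUC --
# not the 10x "special sauce" a venture valuation depends on.
```

Run something like this on the customer’s actual data before believing any moat story.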


Some good bullet points for would-be “AI” technologists:

Eliminate model complexity as much as possible. We’ve seen a massive difference in COGS between startups that train a unique model per customer versus those that are able to share a single model (or set of models) among all customers….

Nice to be able to do, but super rare. If you’ve found a problem like this, you’d better hope you have a special, moaty solution, or a unique data set which makes it possible.

Choose problem domains carefully – and often narrowly – to reduce data complexity. Automating human labor is a fundamentally hard thing to do. Many companies are finding that the minimum viable task for AI models is narrower than they expected. Rather than offering general text suggestions, for instance, some teams have found success offering short suggestions in email or job postings. Companies working in the CRM space have found highly valuable niches for AI based just around updating records. There is a large class of problems, like these, that are hard for humans to perform but relatively easy for AI. They tend to involve high-scale, low-complexity tasks, such as moderation, data entry/coding, transcription, etc.

This is a huge admission of “AI” failure. All the sugar plum fairy bullshit about “AI replacing jobs” evaporates in the puff of pixie dust it always was. When lizard man fixers like Yang regurgitate the “AI is coming for your jobs” meme, they’re really talking about cheap overseas labor; “AI” actually stands for “Alien (or) Immigrant” in this context. Yes, they hold out the possibility of ML being used in some limited domains, and I agree, but the hockey stick required for VC backing, and the army of Ph.D.s required to make it work, don’t mix well with those limited domains, which have limited markets.

Embrace services. There are huge opportunities to meet the market where it stands. That may mean offering a full-stack translation service rather than translation software or running a taxi service rather than selling self-driving cars.

In other words, you probably can’t build a brain in a can that solves all kinds of problems: you’re probably going to be a consulting and services company. In case you aren’t familiar with valuation math: services companies are worth something like 2x yearly revenue, while software and “technology” companies are worth 10-20x revenue. That’s why the WeWork weasel kept trying to position his pyramid scheme as a software company. The implications here are huge: “AI” raises done by A16z and people who think like them are going to come at much lower valuations. And if it weren’t clear enough by now, they said it again:

To summarize: most AI systems today aren’t quite software, in the traditional sense. And AI businesses, as a result, don’t look exactly like software businesses. They involve ongoing human support and material variable costs. They often don’t scale quite as easily as we’d like. And strong defensibility – critical to the “build once / sell many times” software model – doesn’t seem to come for free.

These traits make AI feel, to an extent, like a services business. Put another way: you can replace the services firm, but you can’t (completely) replace the services.

I’ll say it again since they did: services companies are not valued like software businesses. VCs love software businesses: work hard up front to solve a problem, then print money forever. That’s why they get the 10-20x revenue valuations. Services companies? Why would you invest in a services company? Their growth is inherently constrained by labor costs and weird addressable-market issues.

This isn’t exactly an announcement of a new “AI winter,” but it’s autumn, and winter is coming for startups claiming to offer world-beating “AI” solutions. The promise of “AI” has always been to replace human labor and increase human power over nature. People who actually think ML is “AI” believe the machine will just teach itself somehow, no humans needed. That’s not the financial or physical reality. The reality is that there are interesting models which can be applied to business problems by armies of well-trained DBAs, data engineers, statisticians, and technicians. These sorts of things are often best grown inside a large existing company to increase productivity; if the company is sclerotic, it can hire outside consultants, just as it always has. A16z’s portfolio reflects this. Putting aside their autonomous vehicle bets (which don’t look like they have a large “AI” component) and some health tech bets with at least linear-regression-tier data science, I can identify only two overtly data-science-related startups they’ve funded. They’re vastly more long cryptocurrency and blockchain than “AI.” Whatever they’ve said, their money says “AI” companies don’t look so hot.

My TLDR summary:

  1. Deep learning costs a lot in compute, for marginal payoffs.
  2. Machine learning startups generally have no moat or meaningful special sauce.
  3. Machine learning startups are mostly services businesses, not software businesses.
  4. Machine learning will be most productive inside large organizations that have data and process inefficiencies.

Matthew Alhonte

Supervillain in somebody's action hero movie. Experienced a radioactive freak accident at a young age which rendered him part-snake and strangely adept at Python.