Lynx Roundup, April 22nd 2018

Data science with Snkia.

Matthew Alhonte

Pretty rad; it also includes a link to a longer ebook on the subject.

Command Line Tricks For Data Scientists
For many data scientists, data manipulation begins and ends with Pandas or the Tidyverse. In theory, there is nothing wrong with this notion. It is, after all, why these tools exist in the first…
Mathematicians have discovered how the universal patterns behind innovation arise
The work could lead to a new approach to the study of what is possible, and how it follows from what already exists.
Scientists Tested How Much Know-It-Alls Actually Know, And The Results Speak For Themselves
People who think their knowledge and beliefs are superior to others are especially prone to overestimating what they actually know, new research suggests.

Never used this, but looks interesting!

https://simonwillison.net/2018/Apr/20/datasette-plugins/
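From skimming the post, a plugin is basically just a pluggy hook that Datasette calls at the right moment. Something like this, if I'm reading it right (an untested sketch on my part; prepare_connection is the hook the write-up demonstrates):

```python
# Untested sketch of a minimal Datasette plugin, as I understand the post.
# The prepare_connection hook is called with each SQLite connection, so you
# can register custom SQL functions on it.
from datasette import hookimpl


@hookimpl
def prepare_connection(conn):
    # After this, "select hello_world()" works in queries against the instance.
    conn.create_function("hello_world", 0, lambda: "Hello world!")
```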

There's a lot of overhead involved in making even pretty trivial blockchain stuff. I made a demo app with Hyperledger last year, and it took a lot of work just to get it set up and viewable.

Introducing AWS Blockchain Templates for Ethereum and Hyperledger Fabric

Very neat bit of...Philosophy of Science?  Sociology of Engineering?

The Origins of Opera and the Future of Programming
At the end of this post is an audacious idea about the present and future of software development. In the middle are points about mental models: how important and how difficult they are. But first, a story of the origins of Opera.
Your Body Is a Teeming Battleground
Ehrenreich proves a fascinating guide to the science suggesting that our cells, like the macrophages that sometimes destroy and sometimes defend, can act unpredictably and yet not randomly.

I'm definitely in the camp that says AI is an experimental science (particularly Deep Learning stuff, but also generally). But hey, don't take my word for it; here's Turing hisself from the original AI paper: "Machines take me by surprise with great frequency... The view that machines cannot give rise to surprises is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject. This is the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it. It is a very useful assumption under many circumstances, but one too easily forgets that it is false." (By the way, if you've never read it, it's extremely readable and absolutely worth reading.)

https://aiweirdness.com/post/172894792687/when-algorithms-surprise-us

Along somewhat similar lines (though trying to do the opposite):

Interpretable Machine Learning with XGBoost
This is a story about the danger of interpreting your machine learning model incorrectly, and the value of interpreting it correctly. If you have found the robust accuracy of ensemble tree models…
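The gist, as far as I can tell: use SHAP values for per-prediction feature attributions instead of leaning on XGBoost's built-in global feature importances. A rough sketch of that workflow (my own toy example on scikit-learn's diabetes dataset, not the article's code):

```python
# Sketch of the SHAP-on-XGBoost workflow (toy data just to have something to attribute).
import xgboost
import shap
from sklearn.datasets import load_diabetes

data = load_diabetes()
X, y = data.data, data.target

# Train an ordinary gradient-boosted tree model.
model = xgboost.train(
    {"learning_rate": 0.01},
    xgboost.DMatrix(X, label=y),
    num_boost_round=100,
)

# TreeExplainer computes SHAP values: per-row, per-feature attributions
# that sum (with the base value) to the model's output for that row.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# One dot per (row, feature) pair -- shows direction and magnitude of each effect.
shap.summary_plot(shap_values, X, feature_names=data.feature_names)
```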

I think this really gets at the heart of where quant-y people get crossed up when they have to do certain types of code-y things. As soon as I'm outside the realm of "doing operations on data", I'm on edge. If I'm, say, interacting with a database from Python, I'm generally doing so in a way that hides as many of the details of connections & cursors as possible.
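For example, this is about as much database plumbing as I like to see (a toy sketch with an in-memory SQLite database standing in for a real one):

```python
# The kind of "hide the plumbing" pattern I mean: pandas + SQLAlchemy handle
# connections and cursors, and I stay in queries-in, DataFrames-out land.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("sqlite:///:memory:")

# Pretend this table already lived in a real database somewhere.
pd.DataFrame({"day": ["2018-04-20", "2018-04-21"], "clicks": [12, 34]}).to_sql(
    "events", engine, index=False
)

# No explicit connect / cursor / fetchall / close anywhere in sight.
df = pd.read_sql_query("SELECT day, clicks FROM events WHERE clicks > 20", engine)
print(df)
```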

Introduction to Bayesian Linear Regression
The Bayesian vs Frequentist debate is one of those academic arguments that I find more interesting to watch than engage in. Rather than enthusiastically jump in on one side, I think it’s more…
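The basic move, as I understand it: put priors on the intercept, slope, and noise, then sample a posterior instead of fitting a single line. A quick PyMC3 sketch (my own toy example with fake data, not the article's code):

```python
# Bare-bones Bayesian linear regression in PyMC3: instead of one best-fit line,
# you get posterior distributions over the intercept, slope, and noise.
import numpy as np
import pymc3 as pm

# Fake data with a known slope/intercept so there's something to recover.
np.random.seed(0)
x = np.linspace(0, 10, 100)
y = 2.0 * x + 1.0 + np.random.normal(scale=2.0, size=100)

with pm.Model():
    # Weakly-informative priors.
    intercept = pm.Normal("intercept", mu=0, sd=10)
    slope = pm.Normal("slope", mu=0, sd=10)
    sigma = pm.HalfNormal("sigma", sd=5)

    # Likelihood: observations scatter around the regression line.
    pm.Normal("y_obs", mu=intercept + slope * x, sd=sigma, observed=y)

    # MCMC sampling gives posterior draws rather than point estimates.
    trace = pm.sample(2000, tune=1000)

print(pm.summary(trace))
```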

Matthew Alhonte

Supervillain in somebody's action hero movie. Experienced a radioactive freak accident at a young age which rendered him part-snake and strangely adept at Python.