Making DNS More Secure And Private

The Domain Name System is something most people know little or nothing about, and frankly shouldn’t need to, but it is a critical backbone component of what makes the Internet work.
Like many other core aspects of the Internet, it was never designed to be secure or private, nor with the idea that one day billions of people would be using it.
A number of attempts have been made over the years to lock it down, but setting aside the politics of standards groups, DNS is very complicated, and any change has profound implications given the sheer scale at which the Internet operates today.

But two new public DNS services that you can use instead of the one provided by your ISP could make a big difference, as long as you're aware of the drawbacks of trusting them, too.

The good folks at TidBITS have a great write-up on all this, prompted by a new public DNS service from Cloudflare. I always enjoy articles like this, and it's a good primer on how DNS works for anyone who has ever wondered.
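For the curious, a DNS lookup is just a small binary packet sent over UDP to port 53. Here's a rough Python sketch of my own (a toy, not from the TidBITS piece) that hand-builds an A-record query and sends it to Cloudflare's public resolver at 1.1.1.1; real code should use a proper resolver library, which handles name compression, truncation, TCP fallback, and more:

```python
import random
import socket
import struct

def build_query(hostname):
    """Build a minimal DNS query message for an A record."""
    tid = random.randint(0, 0xFFFF)
    # Header: ID, flags (recursion desired), 1 question, 0 answer/authority/additional
    header = struct.pack(">HHHHHH", tid, 0x0100, 1, 0, 0, 0)
    # Question: name as length-prefixed labels, then QTYPE=A (1), QCLASS=IN (1)
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in hostname.split("."))
    question = qname + b"\x00" + struct.pack(">HH", 1, 1)
    return tid, header + question

def lookup(hostname, resolver="1.1.1.1"):
    """Send the query over UDP; return (id_matched, answer_count)."""
    tid, msg = build_query(hostname)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(3)
        s.sendto(msg, (resolver, 53))
        data, _ = s.recvfrom(512)
    rid, flags, qd, an, ns, ar = struct.unpack(">HHHHHH", data[:12])
    return rid == tid, an

# lookup("example.com") would query Cloudflare's resolver over the network.
```

Note that the whole exchange is plaintext UDP, which is exactly why anyone on the path can read or tamper with it, and why these new services also offer encrypted transports.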

Everything old is new again: Neuroevolution in ML

Great article from Science magazine on a resurgence of research into neuroevolution: the idea of mutating neural networks and selecting the best performers, rather than training them directly:

Neuroevolution, a process of mutating and selecting the best neural networks, has previously led to networks that can compose music, control robots, and play the video game Super Mario World. But these were mostly simple neural nets that performed relatively easy tasks or relied on programming tricks to simplify the problems they were trying to solve. “The new results show that—surprisingly—you may actually not need any tricks at all,” says Kenneth Stanley, a computer scientist at Uber and a co-author on all five studies. “That means that complex problems requiring a large network are now accessible to neuroevolution, vastly expanding its potential scope of application.”
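The core loop is simple enough to sketch in a few lines. Here's a toy version of my own (not from any of the Uber papers): evolving the weights of a tiny fixed-topology network to approximate XOR through nothing but mutation and selection, no gradients anywhere:

```python
import math
import random

random.seed(0)  # deterministic toy run

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(w, x):
    # Fixed 2-2-1 topology; w holds 9 numbers (6 weights + 3 biases).
    h1 = sigmoid(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = sigmoid(w[3] * x[0] + w[4] * x[1] + w[5])
    return sigmoid(w[6] * h1 + w[7] * h2 + w[8])

def fitness(w):
    # Negative squared error over the four XOR cases (higher is better).
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

def evolve(pop_size=50, generations=200, sigma=0.5):
    pop = [[random.gauss(0, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 5]  # keep the fittest 20%...
        pop = parents + [               # ...refill with mutated copies
            [wi + random.gauss(0, sigma) for wi in random.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return max(pop, key=fitness)

best = evolve()
```

The research results quoted above are about scaling this same mutate-and-select idea to networks with millions of weights, which is exactly what used to be considered out of reach.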

It's also interesting that the papers cited were published by Uber. A number of very well-resourced companies are pouring money and effort into autonomous driving, and that investment is clearly bearing fruit.

Google Brain Year in Review

The Four Horsemen (Apple, Amazon, Google and Facebook) are all making huge investments in AI and machine learning, but it always feels like Google is at the forefront. It is also the most open, and the Google Brain folks are doing amazing work, both research and applied, in both software and hardware.

They just published a two-part summary of their work in 2017, and it's an impressive read.

Some of my favorites from Part 1 are AutoML (which you can now use yourself) and the TPU custom hardware they have built.

Part 2 covers application domains including healthcare, robotics, physical sciences and music.

Some impressive work is being done, and it's clear we are only scratching the surface. It feels like, with custom hardware available as on-demand cloud resources and techniques like automated machine learning ever closer to practical application, we are going to see a wave of exciting applications and use cases in the next decade.

The Basic Ideas in Neural Networks

Consider it Throwback Thursday, but with all the interest in machine learning, it's easy to forget that many of the core ideas, such as neural networks, have been around for a long time. What we lacked was the computing power and large data sets that Moore's Law and The Cloud™ have since brought us.

Here is a very readable paper [PDF] from 1994 on neural networks from Rumelhart & Widrow at Stanford.
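Fittingly, Widrow's name is attached to one of the oldest and simplest of those core ideas: the Widrow-Hoff (LMS) delta rule from 1960. Here's a toy sketch of my own (not from the paper), training a single linear unit to classify OR using nothing but that rule:

```python
# Widrow-Hoff (LMS) delta rule: a single linear unit learning OR.
# Targets are 0/1; we threshold the linear output at 0.5 to classify.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]  # weights
b = 0.0         # bias
lr = 0.1        # learning rate

for _ in range(1000):
    for (x1, x2), y in data:
        out = w[0] * x1 + w[1] * x2 + b
        err = y - out
        # Nudge each weight in proportion to its input and the error.
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err
```

The weights should settle near the least-squares fit (about w = (0.5, 0.5), b = 0.25), which thresholds all four cases correctly. Stack layers of these units, swap in a nonlinearity, and you are most of the way to the networks the paper describes.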

I have to admit that when I was first introduced to neural networks in 1990 as part of my Computer Science degree, I found them only mildly interesting. I considered a lot of AI researchers to be eternal optimists. What I didn't foresee, of course (and I don't think I was alone here), was the rise of the Internet and the massive data sets it would generate.

I For One Welcome Our Robotic Web Designer Overlords

There has been a lot of talk about robotics and AI eliminating more and more types of jobs over time. Initially these conversations tended to focus on manufacturing and other tasks that were highly repetitive and then evolved to include medical diagnosis and legal discovery (thanks to IBM’s Watson marketing efforts) among others.

An informative and entertaining short documentary on this is Humans Need Not Apply:

If you work in the technology industry, there is probably not much in this video you don't already know. However, in this same industry we tend to think of ourselves as highly skilled and not easily replaceable, and the irony of that hubris is not lost on me.

Which leads us to Emil Wallner, who has a great post on a project to use deep learning to convert web page design mockups into code, automatically.

It's not going to actually replace anyone just yet (CNNs have been doing image analysis for a while now, and the hierarchical, highly structured nature of HTML is well suited to the layered approach of deep learning), but it's a great tutorial on applying deep learning to the real world (albeit a simplified version of it in this case).

The $25 Billion Eigenvector

I am really enjoying spending more time indulging in some good old fashioned Computer Science. Getting back to theory and basics is a great reminder of what I love about computers, what they can do, what they can be.

Machine learning and associated AI topics are attracting a lot of interest these days (much of it warranted, some of it not) but every so often you need some good old fashioned linear algebra.

This paper [PDF] uses the academic equivalent of link baiting: a provocative title for what is really an applied discussion of linear algebra, using Larry and Sergey's PageRank algorithm (or at least a simplified, public-domain version of it).
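The computation at the heart of it is just the power method: keep multiplying a rank vector by the damped link matrix until it settles on the dominant eigenvector. Here's a toy sketch of my own (not from the paper) on a four-page web:

```python
# Toy PageRank by power iteration on a four-page "web".
# Every page here has at least one outgoing link, so no
# dangling-node handling is needed in this sketch.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start uniform
    for _ in range(iterations):
        # Random-surfer model: teleport anywhere with probability
        # (1 - damping), otherwise follow an outgoing link at random.
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            share = damping * rank[p] / len(outs)
            for q in outs:
                new[q] += share
        rank = new
    return rank

ranks = pagerank(links)
```

The resulting rank vector is the stationary distribution of the random surfer's Markov chain, which is why the dominant eigenvector interpretation and the Markov chain story are two views of the same thing.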

I often struggled in college to connect the abstraction of theory to its practical application, AKA "how will I ever use this in the real world?", so I love papers like this that try to connect the dots. (You really need a discussion of Markov chains here too for the full picture, but that's another paper, I guess.)