Built Like The Brain: Neuromorphic Hardware – Low Power, High Speed

According to this article in Nature:

Superconducting computing chips modelled after neurons can process information faster and more efficiently than the human brain.

We have seen the rise of ML first shift work to GPUs (which were designed for the large amounts of linear algebra needed for video games and non-linear video editing, making them inadvertently well suited to ML tasks).

We have seen the advent of FPGAs and then dedicated hardware for ML, especially Google’s exciting work on Tensor Processing Units.

These are all still based on traditional approaches to computing and essentially classic von Neumann architecture.

Perhaps this new work will lead to a generation of hardware that combines neuroscience, electronic engineering and computer science. There is a level of energy efficiency and speed in the human brain that we are not yet close to matching, even as the raw computing power of ML and distributed systems increases exponentially. Carver Mead’s neuromorphic computing may finally become a practical reality.

Everything old is new again: Neuroevolution in ML

Great article from Science magazine on a resurgence of research on neuroevolution, the idea of mutating neural networks and selecting the best performers, rather than training them:

Neuroevolution, a process of mutating and selecting the best neural networks, has previously led to networks that can compose music, control robots, and play the video game Super Mario World. But these were mostly simple neural nets that performed relatively easy tasks or relied on programming tricks to simplify the problems they were trying to solve. “The new results show that—surprisingly—you may actually not need any tricks at all,” says Kenneth Stanley, a computer scientist at Uber and a co-author on all five studies. “That means that complex problems requiring a large network are now accessible to neuroevolution, vastly expanding its potential scope of application.”
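The mutate-and-select loop the article describes can be sketched in a few lines. This is a toy illustration of the general idea, not code from any of the cited papers: it evolves the two weights of a one-neuron "network" y = w·x + b to fit the target y = 2x + 1, using only mutation and selection, no gradients.

```python
import random

def fitness(genome):
    w, b = genome
    # Negative squared error against the target y = 2x + 1; higher is better.
    return -sum((w * x + b - (2 * x + 1)) ** 2 for x in (-1.0, 0.0, 1.0, 2.0))

def mutate(genome, sigma=0.1):
    # Add small Gaussian noise to every weight.
    return [g + random.gauss(0, sigma) for g in genome]

random.seed(0)
population = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(20)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                       # keep the best performers
    population = parents + [mutate(random.choice(parents))
                            for _ in range(15)]    # refill with mutated copies

best = max(population, key=fitness)
print(best)
```

Because the top performers are carried over unchanged each generation, the best genome only ever improves, and the selection pressure steadily drives it toward w ≈ 2, b ≈ 1. The papers the article covers apply this same loop to networks with millions of weights.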

Also interesting that the papers cited were published by Uber. A lot of effort going into autonomous driving of course by a number of very well-resourced companies and the impact of all that money and effort is bearing fruit.

The Next Decade: The S Curve of Machine Learning

A little over 20 years ago, I read something from a technology analyst that completely changed the way my early-twenties self thought about the world, and it has stayed with me ever since.

I had already been using the Internet for about 8 years at that point, initially as an academic network and then in the early days of its commercial use.

I was one of the founders of a technology company and had just moved to San Francisco to set up its U.S. operation. At the time I thought I was pretty much on the cutting edge because I had an ISDN line in my apartment, which was essentially like a much faster version of dial-up. (It could even bond two channels together for a 128kbps experience.)

The analyst was talking about Yahoo! and in particular its 12 month target stock price. The details of that are lost to time, just like the company itself essentially. (Hello Verizon.)

The comment that he made was that you need to think not of the world as it was then (a few million people on the Internet, almost all using 28.8kbps dial-up) but of the world as it will be. The part that stuck with me was something to the effect of: “Imagine everyone has an always-on, high-speed Internet connection and you can take that as a given. Now what kind of applications can you build on it?”

Timing is always the hard part but that concept has influenced not only my thinking but the three companies I have started.

I was reminded of it again when listening to Ben Evans talk about S curves and what the future might look like in another 10 years. He touches on mixed reality and crypto-currencies but he spends a good deal of time providing one of the clearest business explanations for machine learning that I have seen. Well worth a watch.

BTW, my favorite line is “every person in this image is a cell in a spreadsheet and the entire building is an Excel file” when talking about automation and referencing this scene from the 1960 Billy Wilder movie The Apartment:

Amazon Attempts to Put The ‘Convenience’ in Convenience Store

The New York Times has a short piece (with lots of photos) on Amazon’s new Go store, which opened this week in Seattle. The store is opening a year later than Amazon originally said it would, but the premise is fascinating.

There are no checkouts or registers. You enter the store using the app, take what you want off the shelves and then just leave. The store detects what products you put in your bag and charges you.

Amazon made a video:

It is apparently smart enough to notice if you put something back and not charge you for it.

Amazon being Amazon, they don’t say much about the technology beyond buzzword bingo (“deep learning”, “computer vision”, “sensor fusion”).

GeekWire did some digging a little over a year ago and had an interesting report that cites some patent applications. One of the tidbits in that piece is a patent suggesting that if the store has difficulty figuring out whether you just picked up a bottle of mustard or a bottle of ketchup, they might use data from your previous purchases to determine which it is more likely to be.
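That mustard-or-ketchup idea is essentially Bayesian reasoning: weight an ambiguous vision score by a prior built from purchase history. Here is a hypothetical sketch of the concept; all the numbers and names are made up for illustration, not taken from the patent.

```python
def most_likely_item(vision_scores, purchase_counts):
    # Turn purchase history into a prior probability for each item.
    total = sum(purchase_counts.values())
    priors = {item: count / total for item, count in purchase_counts.items()}
    # Posterior ∝ likelihood (vision score) × prior (purchase history).
    posteriors = {item: vision_scores[item] * priors[item] for item in vision_scores}
    return max(posteriors, key=posteriors.get)

# The camera can barely tell the two bottles apart...
vision_scores = {"mustard": 0.51, "ketchup": 0.49}
# ...but this shopper buys ketchup far more often.
purchase_counts = {"mustard": 3, "ketchup": 27}

print(most_likely_item(vision_scores, purchase_counts))  # → ketchup
```

A nearly 50/50 vision call flips decisively to ketchup once the history prior is factored in, which is exactly the behavior the patent application hints at.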

While I am intrigued by the idea, I wonder if it feels (as the NYT reporter mentions) stressful at first to simply leave a store “without” paying. I sometimes get stressed walking into a supermarket with a bottle of water I bought somewhere else, and at the self-checkout I feel oddly guilty that I am not paying for the drink, wondering if people think I am shoplifting. Of course that might say more about me than anything else.

The broader adoption of computer vision in retail is going to be a very interesting area to watch, with some interesting cultural changes sure to come as part of it.

Microsoft Wrote a Book on The Ethics of AI

The good folks at Microsoft have published a book [PDF] (“The Future Computed: Artificial Intelligence and its role in society”) and associated web site on the ethics of AI.


The executive summary cites the following proposed principles:

  • Fairness: When AI systems make decisions about medical treatment or employment, for example, they should make the same recommendations for everyone with similar symptoms or qualifications. To ensure fairness, we must understand how bias can affect AI systems.
  • Reliability: AI systems must be designed to operate within clear parameters and undergo rigorous testing to ensure that they respond safely to unanticipated situations and do not evolve in ways that are inconsistent with original expectations. People should play a critical role in making decisions about how and when AI systems are deployed.
  • Privacy and security: Like other cloud technologies, AI systems must comply with privacy laws that regulate data collection, use and storage, and ensure that personal information is used in accordance with privacy standards and protected from theft.
  • Inclusiveness: AI solutions must address a broad range of human needs and experiences through inclusive design practices that anticipate potential barriers in products or environments that can unintentionally exclude people.
  • Transparency: As AI increasingly impacts people’s lives, we must provide contextual information about how AI systems operate so that people understand how decisions are made and can more easily identify potential bias, errors and unintended outcomes.
  • Accountability: People who design and deploy AI systems must be accountable for how their systems operate. Accountability norms for AI should draw on the experience and practices of other areas, such as healthcare and privacy, and be observed both during system design and in an ongoing manner as systems operate in the world.

Technology itself, of course, is just a tool and tends to be inherently amoral (in the sense of lacking the concept of morals).

Those of us who create it and use it are a different story.

We can choose whether to use a technology and how to use a technology and there are very real implications for society, short and long term, for all these choices. What we can’t do is un-invent it. (Much as occasionally that seems like a good idea.)

What we can, and must, do as a society is decide on a set of ethical principles of how we will use any such technology.

This is complicated when something is brand new or when, as in the case of AI, a technology reaches a level of critical mass it had not previously achieved.

Due to that newness, as a society we usually find we haven’t developed these principles yet. And in our world of software, the rate of change tends to be very high; it can feel like we have gone from something not existing to it being pervasive in a heartbeat.

Many people in the AI community have been debating ethics around AI for decades, so like AI itself as a discipline, this is not a new topic and in many ways the Microsoft book does not break new ground. With the soon-to-be pervasiveness of AI though, Microsoft has a global platform to bring a much greater awareness to the issues and that is to be applauded.

Some of our brightest minds have very publicly raised concerns about the potential problems with AI, notably Elon Musk and Stephen Hawking. We should use their opinions and books like this to have informed debates as we continue to push AI forward.

Google Brain Year in Review

The Four Horsemen (Apple, Amazon, Google and Facebook) are all making huge investments in AI and Machine Learning but it always feels like Google is at the forefront. They are also the most open and the Google Brain folks are doing amazing work, both research and applied, and in both software and hardware.

They have just published a two-part summary of their work in 2017 and it’s an impressive read.

Some of my favorites from Part 1 are AutoML (which you can now use yourself) and the TPU custom hardware they have built.

Part 2 covers application domains including healthcare, robotics, physical sciences and music.

Some impressive work is being done and it’s clear we are only scratching the surface. With custom hardware available as on-demand cloud resources and techniques like automated machine learning ever closer to practical application, it feels like we are going to see a wave of exciting applications and use cases in the next decade.

The Basic Ideas in Neural Networks

Consider it Throwback Thursday, but with all the interest in Machine Learning it’s easy to forget that many of the core ideas, such as neural networks, have been around for a long time. What we were lacking was the computing power and the large data sets that Moore’s Law and The Cloud™ have since brought us.

Here is a very readable paper [PDF] from 1994 on neural networks from Rumelhart & Widrow at Stanford.

I have to admit that when I was first introduced to neural networks in 1990 as part of my Computer Science degree, I found them only mildly interesting. I considered a lot of AI researchers to be eternal optimists. What I didn’t foresee, of course (and I don’t think I was alone here), was the rise of the Internet and the massive data sets it would generate.
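To show just how old and how simple the core ideas are, here is a minimal sketch of the perceptron learning rule from Widrow’s era (Rosenblatt, 1958), learning logical OR in plain Python. The weights start at zero and are guaranteed to converge because the data is linearly separable.

```python
def predict(w, b, x):
    # A single artificial neuron: weighted sum followed by a threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def train(data, epochs=10, lr=1.0):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            error = target - predict(w, b, x)
            # Perceptron rule: nudge weights toward misclassified examples.
            w[0] += lr * error * x[0]
            w[1] += lr * error * x[1]
            b += lr * error
    return w, b

or_data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train(or_data)
print([predict(w, b, x) for x, _ in or_data])  # → [0, 1, 1, 1]
```

A single neuron like this famously cannot learn XOR, which is exactly why the multi-layer networks and backpropagation covered in the Rumelhart & Widrow paper mattered, and why today’s deep networks are “deep”.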

I For One Welcome Our Robotic Web Designer Overlords

There has been a lot of talk about robotics and AI eliminating more and more types of jobs over time. Initially these conversations tended to focus on manufacturing and other tasks that were highly repetitive and then evolved to include medical diagnosis and legal discovery (thanks to IBM’s Watson marketing efforts) among others.

An informative and entertaining short documentary on this is Humans Need Not Apply:

There is not much in this video that you probably don’t already know if you work in the technology industry. However, in this same industry we tend to think of ourselves as highly skilled and not easily replaceable, and the irony of that hubris is not lost on me.

Which leads us to Emil Wallner, who has a great post on a project to use deep learning to convert web page design mockups into code, automatically.

It’s not going to replace anyone just yet (CNNs have been doing image analysis for a while now, and the hierarchical, highly structured nature of HTML is well suited to the layered approach of deep learning), but it’s a great tutorial on applying deep learning to the real world (albeit a simplified version of it in this case).
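The CNN half of that mockup-to-code pipeline rests on one simple operation: sliding a small filter over an image to pick out features like edges and boxes. Here is a minimal pure-Python sketch of that convolution step (illustrative only, not Wallner’s code), applied as a vertical-edge detector:

```python
def convolve2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, as in most DL libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Dot product of the kernel with the patch under it.
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A tiny "image" with a vertical edge down the middle...
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
# ...and a 2x2 kernel that responds to left-dark/right-bright transitions.
kernel = [[-1, 1],
          [-1, 1]]
for row in convolve2d(image, kernel):
    print(row)  # each row prints as [0, 2, 0]: the edge lights up in the middle
```

A deep network stacks many learned kernels like this, layer upon layer, which is why detecting the nested boxes of a web-page mockup turns out to be such a natural fit.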