Amazon Attempts to Put The ‘Convenience’ in Convenience Store

The New York Times has a short piece (with lots of photos) on Amazon’s new Go store, opening this week in Seattle. The store is opening a year later than Amazon originally said it would, but the premise is fascinating.

There are no checkouts or registers. You enter the store using the app, take what you want off the shelves and then just leave. The store detects what products you put in your bag and charges you.

Amazon made a video:

It is apparently smart enough to notice if you put something back and not charge you for it.
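Amazon hasn’t said how the charging logic actually works, but conceptually the store just has to fold a stream of take and put-back events into a final cart. A minimal sketch of that idea (all names and events are my own invention, not Amazon’s):

```python
from collections import Counter

def settle_cart(events):
    """Fold a stream of (action, item) events into the items to charge for.

    'take' adds an item to the shopper's virtual cart;
    'putback' removes it, so returned items are never billed.
    """
    cart = Counter()
    for action, item in events:
        if action == "take":
            cart[item] += 1
        elif action == "putback" and cart[item] > 0:
            cart[item] -= 1
    return {item: count for item, count in cart.items() if count > 0}

# A shopper grabs two items, then changes their mind about one:
charges = settle_cart([
    ("take", "mustard"),
    ("take", "ketchup"),
    ("putback", "ketchup"),
])
# charges == {"mustard": 1}
```

The hard part, of course, is not this bookkeeping but reliably producing those events from cameras and shelf sensors in the first place.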

Amazon being Amazon, they don’t say much about the technology beyond buzzword bingo (“deep learning”, “computer vision”, “sensor fusion”).

GeekWire did some digging a little over a year ago and had an interesting report that cites some patent applications. One of the tidbits in that piece is a patent suggesting that if the store has difficulty figuring out whether you just picked up a bottle of mustard or a bottle of ketchup, they might use data from your previous purchases to determine which it is more likely to be.
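That patent idea is, roughly, Bayesian: combine the vision system’s uncertain guess with a prior built from your purchase history. A toy illustration (the numbers and function are invented for this sketch, not taken from the patent):

```python
def disambiguate(vision_scores, purchase_counts):
    """Combine ambiguous vision scores with a purchase-history prior.

    vision_scores: item -> likelihood reported by the camera system
    purchase_counts: item -> how often this shopper bought it before
    Returns the item with the highest combined (unnormalized) posterior,
    using add-one smoothing so never-bought items still have a chance.
    """
    total = sum(purchase_counts.values())

    def posterior(item):
        prior = (purchase_counts.get(item, 0) + 1) / (total + len(vision_scores))
        return vision_scores[item] * prior

    return max(vision_scores, key=posterior)

# The cameras can't tell the bottles apart, but this shopper buys ketchup a lot:
item = disambiguate(
    vision_scores={"mustard": 0.5, "ketchup": 0.5},
    purchase_counts={"ketchup": 9, "mustard": 1},
)
# item == "ketchup"
```

When the vision scores are decisive, the prior barely matters; it only breaks ties like the one above.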

While I am intrigued by the idea, I wonder if it feels (as the NYT reporter mentions) stressful at first when you simply leave a store “without” paying. I sometimes get stressed walking into a supermarket with a bottle of water I bought somewhere else, feeling oddly guilty at the self-checkout that I am not paying for the drink and wondering if people think I am shoplifting. Of course, that might say more about me than anything else.

The broader adoption of computer vision in retail is going to be a very interesting area to watch, with some interesting cultural changes sure to come as part of it.

Why Does Apple Have so Much Cash?

The insightful Horace Dediu writes some of the easiest to understand pieces on Apple and its economic model. Many people know that Apple is sitting on a large cash pile, over $270 billion (with a “B”), but people sometimes ask me why Apple doesn’t spend it (on large acquisitions) or give it back to shareholders (via dividends or share buybacks). The answer to the former is cultural, and Apple actually already does the latter. Horace has put together a great FAQ on all of this.

Going Global With Your Startup

The Y Combinator folks are on a roll with another great post, this time on Going Global With Your Startup.

Kwindla Hultman Kramer highlights 5 key issues he has experienced himself and heard about from other founders when it comes to selling your product or service outside the United States:

  1. Fulfilling international orders is still surprisingly complicated and expensive
  2. If you’re opening an office in a new country, put someone who already knows your company well in charge of that process
  3. Anywhere you have employees, you need an accountant and a lawyer
  4. Work visas are complicated, expensive, and stressful
  5. You’re going to need to get on an airplane sometimes

There is lots more detail and good advice in the post so you should read the whole thing. (I’ll wait…)

I have taken a software company global in both directions – first a company started in Europe (Ireland) that expanded into the United States, and then my current company Qstream, which started here in Boston and then expanded into Europe (Ireland and the UK.)

I have no experience with #1 because my companies have never made physical products. (Technically I guess that is not quite true because my first company was long enough ago that we originally shipped the product on 3.5″ floppy disks and later on CDs. Good times. Good times.)

However, I have dealt with #2 through #5 extensively and strongly agree with the recommendations. I have spoken to groups of entrepreneurs and individuals in the past about going global and you can definitely short-circuit learning a lot of lessons the hard way by talking to someone who has done it.

It is still remarkably complicated to set up and run a business in multiple locations, particularly for a start-up that is resource constrained. You also need people who know the local culture, customs and laws. If you can get the balance right (don’t rush to do it, take it slow, one country at a time) then it can be a big competitive advantage.

Microsoft Wrote a Book on The Ethics of AI

The good folks at Microsoft have published a book [PDF] (“The Future Computed: Artificial Intelligence and its role in society”) and associated web site on the ethics of AI.

The Future Computed

The executive summary cites the following proposed principles:

  • Fairness: When AI systems make decisions about medical treatment or employment, for example, they should make the same recommendations for everyone with similar symptoms or qualifications. To ensure fairness, we must understand how bias can affect AI systems.
  • Reliability: AI systems must be designed to operate within clear parameters and undergo rigorous testing to ensure that they respond safely to unanticipated situations and do not evolve in ways that are inconsistent with original expectations. People should play a critical role in making decisions about how and when AI systems are deployed.
  • Privacy and security: Like other cloud technologies, AI systems must comply with privacy laws that regulate data collection, use and storage, and ensure that personal information is used in accordance with privacy standards and protected from theft.
  • Inclusiveness: AI solutions must address a broad range of human needs and experiences through inclusive design practices that anticipate potential barriers in products or environments that can unintentionally exclude people.
  • Transparency: As AI increasingly impacts people’s lives, we must provide contextual information about how AI systems operate so that people understand how decisions are made and can more easily identify potential bias, errors and unintended outcomes.
  • Accountability: People who design and deploy AI systems must be accountable for how their systems operate. Accountability norms for AI should draw on the experience and practices of other areas, such as healthcare and privacy, and be observed both during system design and in an ongoing manner as systems operate in the world.
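The fairness principle, in particular, can be made concrete with a simple check: given similar qualifications, do two groups receive positive decisions at similar rates? A minimal demographic-parity sketch (the data and the idea of using this one metric are my own illustration, not from the Microsoft book):

```python
def selection_rates(decisions):
    """Compute per-group positive-decision rates.

    decisions: list of (group, got_positive_outcome) pairs.
    A large gap between groups is a red flag worth investigating,
    though on its own it is not proof of unfairness.
    """
    totals, positives = {}, {}
    for group, positive in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(positive)
    return {group: positives[group] / totals[group] for group in totals}

rates = selection_rates([
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
])
# rates == {"A": 0.75, "B": 0.25} -- a 50-point gap worth examining
```

Real fairness auditing uses many such metrics (and they can conflict), but even this crude one catches problems that would otherwise go unnoticed.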

Technology itself, of course, is just a tool and tends to be inherently amoral (in the sense of lacking the concept of morals).

Those of us who create it and use it are a different story.

We can choose whether to use a technology and how to use a technology and there are very real implications for society, short and long term, for all these choices. What we can’t do is un-invent it. (Much as occasionally that seems like a good idea.)

What we can, and must, do as a society is decide on a set of ethical principles of how we will use any such technology.

This is complicated when something is brand new or, as in the case of AI, when a technology reaches a new level of critical mass it had not previously achieved.

Because of that newness, we usually find as a society that we haven’t developed these principles yet. In the world of software, the rate of change also tends to be very high; it can feel like something has gone from not existing to being pervasive in a heartbeat.

Many people in the AI community have been debating ethics around AI for decades, so like AI itself as a discipline, this is not a new topic and in many ways the Microsoft book does not break new ground. With the soon-to-be pervasiveness of AI though, Microsoft has a global platform to bring a much greater awareness to the issues and that is to be applauded.

Some of our brightest minds have very publicly raised their concerns about the potential problems with AI, notably Elon Musk and Stephen Hawking. We should use their opinions and books like this to have informed debates as we continue to push AI forward.

Google Brain Year in Review

The Four Horsemen (Apple, Amazon, Google and Facebook) are all making huge investments in AI and Machine Learning but it always feels like Google is at the forefront. They are also the most open and the Google Brain folks are doing amazing work, both research and applied, and in both software and hardware.

They just published a two-part summary of their work in 2017 and it’s an impressive read.

Some of my favorites from Part 1 are AutoML (which you can now use yourself) and the TPU custom hardware they have built.
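AutoML, at its simplest, is automating the search over model configurations that a human would otherwise do by hand. A toy random-search sketch of that idea (the objective function here is a stand-in for real model training, and none of this reflects Google’s actual implementation):

```python
import random

def random_search(objective, space, trials=100, seed=0):
    """Sample hyperparameter configurations and keep the best scorer."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = {name: rng.choice(choices) for name, choices in space.items()}
        score = objective(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Stand-in objective: pretend validation accuracy peaks at lr=0.1, depth=3.
def fake_accuracy(cfg):
    return 1.0 - abs(cfg["lr"] - 0.1) - 0.05 * abs(cfg["depth"] - 3)

space = {"lr": [0.001, 0.01, 0.1, 1.0], "depth": [1, 2, 3, 4, 5]}
best, score = random_search(fake_accuracy, space)
```

With enough trials this should converge on the peak. Systems like Google’s replace the random sampler with learned search strategies (and search over architectures, not just a handful of hyperparameters), but the loop is the same shape.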

Part 2 covers application domains including healthcare, robotics, physical sciences and music.

Some impressive work is being done and it’s clear we are only scratching the surface. With custom hardware available as on-demand cloud resources and techniques like automated machine learning ever closer to practical application, it feels like we are going to see a wave of exciting applications and use cases in the next decade.

Inside One of America’s Last Pencil Factories

Despite my love of technology, you just can’t beat a pencil for writing. I have tried every stylus-like device in the last two decades, from Palm Pilots to Newtons to Motion Computing tablets to Wacom tablets to the Apple Pencil (which is the best of all of them.)

They still don’t beat the feel and ease of use of a good old-fashioned pencil. The sound it makes as it moves across the page and the feel of the graphite against paper are very satisfying. There is a reason modern pencils have been around for over 200 years.

I switched to mechanical pencils years ago (my current favorite is this one) and still write with one in a Moleskine notebook for taking notes in meetings, gathering my thoughts or making lists.

Until I read this great piece in the New York Times magazine, looking inside one of the last pencil factories in America, I had no idea how they were made. There is some gorgeous photography of the manufacturing process as well. Ultimately I am sure the end of the pencil will come, but not just yet it seems.

Security in iOS 11

Apple has been getting more and more detailed about documenting the end-to-end security in iOS in the last few years as part of its broader focus on security, encryption and privacy.

As part of that, they have been publishing security white papers detailing both hardware and software security in their mobile OS.

They recently published the latest version of this in a document with the scintillating title of iOS Security. [PDF]

It’s well worth a read. It is always fascinating to me to see both the lengths they go to to appropriately hide the complexity of the technological solution from the user (who shouldn’t need to worry about it) and the benefits they get from designing both the hardware and the software.

How Star Wars Was Saved in The Edit

One of my favorite screenwriters to follow is Ken Levine. He has had an amazing career in television and features, writing for M*A*S*H and Cheers among many others, as well as directing.

He has great comedy chops and I always learn something from his posts. (He is also a professional sports announcer, so his talents apparently know no bounds. Sickening.)

He came across a fascinating 18-minute YouTube video on how poor the pacing and storytelling were in the original rough cut of Star Wars and how much impact Lucas’ editors had in getting us to the final cut we know and love. (At least the final original theatrical cut. Lucas likes to tinker.)

If you are interested in storytelling and getting a better appreciation for the importance of editing, it’s well worth the time.

Scripto, The App That Stephen Colbert Helped To Build

I’ve been a Final Draft user for more than a decade and am pretty familiar with the writing tools available for screenwriting but I had never heard of Scripto until I saw this New Yorker article.

Stephen Colbert and one of his writers, Rob Dubbin, (who likes to code on the side) spent a year tinkering with a new collaborative writing tool suitable for late night comedy news shows like their own. After a year it was in production use on their show and today it’s being used by a host of other shows too.

It’s not designed to be a competitor to something like Final Draft, as it is geared not just for collaborative environments but also for live (or live-to-tape) television production (feeding TelePrompTers, etc.)

I had also never heard of what sounds like an antiquated piece of software from the AP called ENPS which is apparently used by hundreds of newsrooms. (To be fair given that it was designed around more traditional newsrooms it may work well in that use case and just be ill-suited to the different environment of late night news comedy. On the other hand, while the AP is a fine news organization, it isn’t exactly where I would think to go for cutting edge software or UX, so there’s that.)

There is not a lot of information in this short article but I did find what appears to be the Scripto web site. Clearly going for the minimalist approach there. I am fascinated by this product now and want to know more about how it’s built (it appears to be browser-based), what its features are and how much it costs.

Of course I am not a late night comedy news show. I don’t even play one on television. So I am not in the target market. But the whole project sounds cool.

Advice For First Time Founders

Y Combinator has a great post that starts with 3 questions:

1. What are some things that you should’ve known as a first-time founder but did not?
2. How did you learn them?
3. How did they help?

They collected responses from a bunch of founders and there is some great stuff in here.

Part of me wonders though how much it will resonate with someone who hasn’t yet dealt with the issues raised.

I see so much good advice in here, based on lessons learned the hard way across the 3 companies I have started, but part of doing a tech startup is having a certain amount of healthy delusion. Delusion about how good your idea is and your likelihood of succeeding.

This is necessary because startups are one of those endeavors where if you knew how hard it was going to be and how long it was going to take, you would never do it. People will tell you you are crazy. You will hit roadblock after roadblock. You need this delusion to persevere.

But it is also your Achilles’ heel. How do you recognize good advice (on hiring, for example) but ignore bad advice (on your go-to-market model, say)? Here your healthy delusion can become just plain old delusion and you end up having to learn the lessons the hard way.

I’ll let you know if I figure out the answer to that one. In the meantime, the responses are full of pearls of wisdom. Also, as is often the case on Hacker News, the comments are great too.