June 12, 2018

Go Native (App) or Go Home, and Other Key Takeaways from Apple’s WWDC 2018

I just returned from my first WWDC. I feel like I learned more in a week at Apple’s annual developers conference than I have in years of actually developing iOS apps. It was such a profound experience, in fact, that I sent the picture used as the header of this post to our Solutions team in our private messaging channel last week with the note, “WWDC as metaphor. From darkness into the light of knowing.” Here are a few of the things I found most intriguing from my week in San Jose.

Go Native or Go Home

Apple is on a multi-year project to kill hybrid applications, at least the ones that rely on a WebView and run JavaScript on the device. Every year, Apple further squeezes advertisers’ ability to target users in that environment, so any ad-supported cross-platform application is going to have long-term issues with its revenue model. At the same time, Apple (and Google too) provide value-added services that must be custom built no matter the language used. This means that to use the latest features of a platform, a hybrid app still needs an iOS developer and an Android developer, plus a JavaScript specialist. Requiring three skill sets rather than two is not a win.

I’m more convinced than ever that anyone developing iOS applications should build native applications rather than hybrid WebView-based ones. Hybrid development emphasizes the process (“Hey, you only need to build it once!”) over the product. Customers don’t care that it took you half the time to build a bad product; it’s still bad. One major coffee chain whose logo you’d instantly recognize may have stores on every corner, but even they, with all their resources, still push out a cross-platform, bug-ridden, usability mess of a hybrid application. Contrast that with a Fortune 500 big box store whose mobile team I had the opportunity to spend time talking with. They walked me through their company’s mobile-strategy journey: the only way to get an accessible app (a killer feature for them) was to go native. Their story is much closer to how 3Pillar does things: start with the product outcome, not the technology or the process.

Buckle Your Seatbelt for Apple Machine Learning

Apple Machine Learning is absolutely amazing. Train models on macOS using Create ML, then run them on Apple platforms using Core ML. Training leverages transfer learning: Apple provides a roughly 90%-complete pretrained model, and you only train the final piece with Create ML on macOS. That puts image recognition easily within reach of regular iOS development teams; the level of ML understanding required of developers is lower than it has ever been. While Create ML is amazing, it still has limitations. It is great for specific problem sets such as image recognition, but there are use cases where it is not appropriate, for example when you are looking for minor differences between images, because the pretrained models used in transfer learning are tuned to treat minor differences as noise. Apple succeeds by delivering a narrow set of features very well.
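To make that workflow concrete, here is a minimal sketch of training an image classifier with Create ML in a macOS playground. The directory layout, file paths, and model name are hypothetical; Create ML expects training images sorted into one subdirectory per label.

    import CreateML
    import Foundation

    // Hypothetical layout: Training/latte/*.jpg, Training/espresso/*.jpg, etc.
    // Create ML reads each subdirectory name as a class label.
    let trainingDir = URL(fileURLWithPath: "/Users/me/Training")

    // Transfer learning happens under the hood: Create ML starts from a
    // pretrained vision model and trains only the final classifier on your data.
    let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir))

    // Inspect training accuracy, then export a model for use with Core ML on device.
    print(classifier.trainingMetrics)
    try classifier.write(to: URL(fileURLWithPath: "/Users/me/Drinks.mlmodel"))

That is essentially the whole training loop; everything below the surface (feature extraction, the pretrained network) is Apple’s.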

Everyone Talks A Good Game About Privacy; Apple Actually Lives It

Apple’s approach to privacy is really impressive. Privacy is treated as a first-class product attribute at Apple, just like usability, functionality, or security. Apple’s privacy teams are multi-disciplinary groups that address the legal, compliance, and engineering aspects of privacy throughout the product development process. Privacy engineers actually review code for compliance, just as security teams inspect code for weaknesses. Communicating a focus on privacy, and actually following through on it, builds the kind of customer trust and loyalty every company wants.

Hello, Siri.

Siri Shortcuts was one of the headline features at the keynote. It did not seem really exciting, but digging in I believe there is a lot of potential for 3Pillar customers. Specifically, Siri will prompt users to interact with apps based on previous behavior AND what they are doing in real time. Yes, you can send a notification to a user to remind them of something they do regularly, but that notification can easily get lost if the user is driving or otherwise distracted. Siri knows what the user is doing right now, so Siri can prompt the user about regular activities based on the current situation. That means that information will be received by the user when they are ready to consume it. For example if you leave for your morning commute 15 minutes late, Siri knows that and will shift the time that it recommends a latte for you. At the moment Siri is only recommending stuff based on your behavior, but it is not difficult to imagine that it will evolve to predict future behavior.
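For app teams, the entry point is “donating” activities to the system so Siri can learn the pattern. Here is a minimal sketch using the NSUserActivity route introduced with iOS 12; the activity type and phrases are hypothetical, and a real app would also register the type under NSUserActivityTypes in its Info.plist.

    import Foundation

    // Hypothetical activity type for a coffee-ordering app.
    func donateOrderShortcut() {
        let activity = NSUserActivity(activityType: "com.example.coffee.order-latte")
        activity.title = "Order my usual latte"
        activity.isEligibleForPrediction = true       // iOS 12: allow Siri to suggest this
        activity.suggestedInvocationPhrase = "Latte time"

        // Making the activity current donates it. Each donation gives Siri another
        // data point about when and where the user performs this action. In a real
        // app you would typically assign this to the visible view controller's
        // userActivity property instead.
        activity.becomeCurrent()
    }

Donate every time the user performs the action, and Siri builds up the behavioral picture that drives its suggestions.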

My Favorites

WWDC is known for its stellar presentations all around. My personal favorite of the week was Vision with Core ML, in which you can watch an object recognition model being trained and deployed onstage using Create ML and Core ML. Several other presentations from the week are also worth watching.
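For a sense of how little code the deployment side takes, here is a minimal sketch of classifying an image with a Core ML model through the Vision framework. The “Drinks” model name is hypothetical and assumed to be bundled with the app (Xcode compiles Drinks.mlmodel into Drinks.mlmodelc when it is added to a target).

    import CoreGraphics
    import CoreML
    import Foundation
    import Vision

    func classify(_ image: CGImage) throws {
        let modelURL = Bundle.main.url(forResource: "Drinks", withExtension: "mlmodelc")!
        let visionModel = try VNCoreMLModel(for: MLModel(contentsOf: modelURL))

        // Vision handles scaling and color conversion before running the model.
        let request = VNCoreMLRequest(model: visionModel) { request, _ in
            guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
            print("\(top.identifier): \(top.confidence)")
        }

        try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
    }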

Wrapping it All Up

As always, Apple is one of the major players not only pushing the tech space forward but also making decisions that will have significant downstream effects for companies of all shapes and sizes. Look no further, for example, than the focus they’re placing on steering developers toward native apps over hybrid apps. Questions about what else I saw or heard at WWDC? Drop a note in the comments section below, or feel free to connect with me on LinkedIn.