Google Next – A Google cloud platform experience

I was lucky enough to attend this year’s Google Next conference in London. With the tickets being free, I’m guessing there was a huge number of applicants, so thanks for picking me, Google.

I’d been excited about this conference ever since I got my ticket approved. The developer track sounded super cool and it certainly didn’t disappoint.

Keynote: Build What’s Next

For me the day started at about 10.30am with some refreshments, before grabbing a seat at the front for the keynote by Carl Schachter (Vice President, Global Markets, Google Cloud Platform) and Greg DeMichillie (Director, Product Management, Google Cloud Platform). I’ll be honest, the keynote did feel a little like a well-rehearsed sales pitch.

Google invited a bunch of their customers up on stage to talk about their businesses and how GCP has helped them evolve. I’ve always used AWS for cloud computing, but a couple of points made me consider using GCP instead. The top reasons were that cloud is in Google’s DNA: they are true innovators of the web, leading the way in most areas and helping communities which they believe are also leading the way but don’t have enough resources. GCP is also significantly cheaper than AWS. Security was mentioned too; they made a point of letting everyone know it was their engineers who discovered the Heartbleed bug, which meant GCP was patched before any other cloud provider.

On the technical front, Google does have some sweet tools, BigQuery and Dataflow being two of their main proprietary services; their background with containers and work with Docker is another big plus over AWS. AWS does have Redshift, its BigQuery alternative, but BigQuery was built with searching Google’s own humongous dataset in mind. Amazon obviously have a massive dataset too, but I’m not sure it comes anywhere near the size of Google’s (Source: a fantastic What If article). AWS has also released a container-based solution that utilises Docker, but I doubt it will be comparable to GCP’s, given the amount of effort Google are putting into the Docker project, not to mention Kubernetes, the tool GCP uses for deploying clusters. Obviously, 6 or 12 months down the line this could be a completely different story.

From Zero to Hero: A Developer’s Guide to Google Cloud Platform

Unfortunately I missed the first half of this talk by Robert Kubis, but its premise was an introduction to what GCP has to offer and how you can use those services to create a real-world application. The example used in the talk was Google Cloud Spin, which uses a collection of Android phones to take photos of a person, upload them to the cloud and stitch them together into a video. The main GCP services covered were:

Google App Engine – A simple VM solution for doing what you do best: coding cool stuff. GAE will automatically handle scaling and load balancing for you.
Google Compute Engine – Similar to Google App Engine in theory, but far more catered towards instances that are only required for a short amount of time.
Google Nearline – A cold storage service, similar to Amazon’s Glacier, except where Amazon promise to have your data out of cold storage within hours, Google promises seconds.
Google BigQuery – An SQL-like querying service, heavily used within Google’s own infrastructure, which enables querying petabytes of data in seconds.
Google Container Engine – Only released by Google this week and still in beta, this lets you run Docker containers on the Google Cloud Platform, using the Kubernetes tool to help manage your clusters.
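To make the App Engine entry a little more concrete: App Engine’s Python runtime serves standard WSGI applications, so a deployable app is little more than the handler below. This is only a sketch; the app.yaml configuration and the scaling layers App Engine adds on top are omitted, and the response text is made up.

```python
# Minimal WSGI handler of the kind App Engine's Python runtime serves.
# App Engine supplies the server, scaling and load balancing around it;
# locally you could serve it with wsgiref from the standard library.

def application(environ, start_response):
    """Respond to every request with a plain-text greeting."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from the cloud"]
```

The point of the bullet above is that this handler is essentially all the code you own; the platform handles everything else.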

Real-time Mobile Games with AI: How hard can it be?

Terrance Ryan led this talk and it was my favourite of the day. Gaming? Check. Real-time? Check. AI? Check. Terrance was a great speaker and managed to keep the audience entertained throughout. It didn’t take long before Firebase got its first mention of the day.

Firebase label themselves as a real-time backend as a service, essentially a platform as a service. That description doesn’t do it justice, though: Firebase is a backend service powered by Node, MongoDB and Netty, letting you create real-time applications with simple API calls from JavaScript, Swift, Objective-C or Java. Examples are also readily available for a collection of backend languages like PHP, Python and Ruby, although these are far less useful.

Firebase also provides an extremely simple approach to authentication against Facebook, Twitter, Google and GitHub. Strangely, you won’t see Google’s brand plastered all over the website, even though Google acquired the startup at the end of last year.

The presentation started by showing off some sample applications created with Firebase, like real-time bus locations on Google Maps, two-player Tetris and a multi-user drawing board. Finally, a multiplayer game of Asteroids was unveiled that uses a Firebase backend, complete with a live demo involving the audience, which easily had 50 concurrent users. After that the code was finally shown, and damn was it simple: Firebase hides all the complexity from you in an extremely elegant way. I’m not going to get into the code here, but if you haven’t looked at Firebase already and you have a real-time project where you want to just get coding and let someone else worry about the architecture, this could be your saviour.
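I won’t reproduce the demo’s code, but to give a flavour of how thin the surface is: every path in a Firebase database doubles as a REST endpoint ending in `.json`, and the client SDKs wrap calls like that while adding the real-time push on top. Here’s a rough Python sketch of the write you’d make to sync a player’s position; the app name `my-game` and the data shape are placeholders I’ve invented, not the demo’s.

```python
import json

# Sketch of a write against Firebase's REST API. The app name "my-game"
# and the payload fields are made-up placeholders for illustration.

FIREBASE_APP = "https://my-game.firebaseio.com"

def firebase_url(path):
    """Build the REST endpoint for a path in the Firebase tree."""
    return "%s/%s.json" % (FIREBASE_APP, path.strip("/"))

def player_state(player, x, y):
    """The JSON payload you would PUT to sync a player's position."""
    return json.dumps({"player": player, "x": x, "y": y})
```

With an HTTP library you would PUT `player_state(...)` to `firebase_url("players/p1")`; the JavaScript SDK collapses the same write into a single `ref.set(...)` call and pushes the update to every connected client.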

Desired State: Containing Chaos with Kubernetes

I’ve been playing around with Docker a significant amount over the last few months, though up until now I’ve only run development environments through Docker containers. Anyone who has tried using Docker to deploy a set of containers that can indefinitely scale in and out as demand changes will have quickly gone grey and then bald from ripping their hair out. Scaling is difficult to get right, and when using the newest and coolest tech on the block it becomes even more painful.

This time last year Docker announced a bunch of new tools at DockerCon. One of these was Docker Swarm, which is used to help set up container clusters that act and feel like a single Docker host. Unfortunately it hasn’t really taken off, as no one seems to have found a way to use it to deploy in production. A number of other tools have also come out, like Mesos, CoreOS and Marathon, all doing similar things, so it can be tricky deciding what to use, if any.

Mandy Waite gave a talk about Google’s experience with containers over the last 10 years, so if you thought Docker stumbled upon the idea of containers a couple of years back, you’d be wrong. Over the last 6-12 months Google have been working very closely with Docker in an effort to create a better, faster and more stable container solution for everyone. In that time they’ve built and released Kubernetes, Google’s own solution to clustering Docker containers. It’s still in beta and not yet recommended for production either, but if you’re interested in Docker it’s certainly worth getting a head start on.

Mandy spoke about how Kubernetes uses pods as fungible collections of containers, and replication controllers, which communicate with a master node to ensure a certain number of pods always remain online. She also touched upon the notion of services, which allow your different pods to share their IPs. That probably sounds quite fuzzy, but imagine you have 3 Apache nodes and 10 MongoDB nodes. The Apache nodes are going to need to know the IPs of the MongoDB nodes. This can be done when creating the cluster, but as your MongoDB nodes scale in and out, you’re going to need to keep the Apache nodes up to date. Services will do this for you.
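To make that concrete, here is roughly the shape of a Kubernetes Service definition, written as a Python dict purely for illustration (in practice it’s YAML or JSON handed to the kubectl tool). The names and labels are invented for this example.

```python
# Rough shape of a Kubernetes Service definition, as a Python dict for
# illustration. The "mongo" name and "app" label are made up; a real
# cluster would define them to match its replication controller.
mongo_service = {
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {"name": "mongo"},
    "spec": {
        # Route traffic to every pod carrying this label, however many
        # replicas the replication controller is currently keeping alive.
        "selector": {"app": "mongo"},
        "ports": [{"port": 27017, "targetPort": 27017}],
    },
}
```

The Apache pods then talk to the service’s stable address rather than to individual MongoDB pod IPs, so scaling the MongoDB tier in and out needs no reconfiguration on the Apache side.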

As of the Monday just gone, Google has released the Google Container Engine as part of GCP, which uses Docker and Kubernetes to build your cluster. AWS also has a service for running Docker containers, although it doesn’t use Kubernetes.

So yeah, if you’re still reading this, you’re probably a Docker user and you’re probably thinking that true love has finally found you. But let’s not get carried away: Kubernetes, and even Docker, still have a long way to go, as does the support for them in cloud platforms. I highly recommend going through as much of the documentation as you can and just messing around with these new technologies.

There is no doubt in my mind that containers are the future of virtualisation, as a community we just need to work out the best way to utilise these technologies in the cloud with the ever growing demand for scale.

Your Data and the World Beyond MapReduce

Felipe Hoffa took the stage for the finale, to talk about BigQuery. You’ve heard that name before, right? Well, most of us have, but few of us are lucky enough to work on data big enough to require such a powerful technology.

At Google they like to say that MapReduce is dead, and for them that’s true: BigQuery is their next big thing and it’s being used to power just about every service they provide. However, I don’t believe MapReduce is dead. It’s never going to match the speed of BigQuery, but for the majority of applications and businesses that’s fine; our data sets are significantly smaller and MapReduce is generally much more cost effective.

BigQuery can allegedly process petabytes of data a second, and if that’s what you need then BigQuery is what you need too. A couple of demos were given. In one, 400GB of Wikipedia searches was queried for the regex cat*s, which completed in 1.3 seconds. That’s quite different from the petabytes they claim, though. I didn’t find out the time complexity of BigQuery, so it quite possibly could be O(1), which would be downright amazing. Lastly on the BigQuery front, it also supports an SQL-like syntax, meaning the learning curve is absolutely minimal. Brilliant.
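For a flavour of that demo, here is roughly the shape of such a query in BigQuery’s SQL-like dialect, with the regex also checked locally against some sample titles. The public Wikipedia sample table is my assumption about what was queried on stage, and I’ve written the pattern as cat.*s (“cat”, then anything, then an “s”), which is presumably what the slide’s cat*s was shorthand for.

```python
import re

# The public Wikipedia sample table and the LIMIT are assumptions about
# the demo; BigQuery's legacy SQL uses REGEXP_MATCH for regex filters.
QUERY = """
SELECT title
FROM [publicdata:samples.wikipedia]
WHERE REGEXP_MATCH(title, r'cat.*s')
LIMIT 10
"""

# The same pattern checked locally against a few invented titles.
pattern = re.compile(r"cat.*s")

titles = ["cats", "category lists", "dogs", "caterpillars"]
matches = [t for t in titles if pattern.search(t)]  # everything but "dogs"
```

The point of the demo was that the query itself is plain SQL with a regex bolted on; the only thing that changes at petabyte scale is the FROM clause.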

Conclusion

Overall the day was very interesting: I learnt a lot and took away plenty of new ideas on how I can approach different challenges in the future.
