Blue Box Powers Travis CI's Next Generation Infrastructure

Mathias Meyer

We have some great news for you today: last weekend we finally switched all builds on http://travis-ci.org to our new build infrastructure. This setup not only gives you more processing power (1.5 cores per build) and twice as much memory (3 GB), it also runs on container-virtualized hardware, directly off SSDs. It’s a 64-bit platform, too, with the option of supporting 32-bit environments as well!

All this has been made possible by our amazing partner, Blue Box, who have provided us with a customized private cloud setup, tailored specifically for our needs as a hosted continuous integration platform.

This new platform allows us to do many things a lot better than our previous, manually managed VirtualBox setup did. We can easily add more capacity as we grow (and boy, we’ve grown a lot over the last months!), and we can roll out updates to our build environments a lot faster, letting us ship new language releases, framework updates and service additions much more quickly than before.

We’re pretty psyched about this new partnership. It’s great news for the open source community, as we can follow the build demand a lot better than before and update the build environment much faster too. We can also leave the infrastructure management in the amazing hands of the Blue Box team and focus on shipping new features.

You should give Blue Box a nice high five next time you meet someone from their awesome team. They’re amazing folks to work with, and we’re looking forward to working closely with them to push Travis CI forward!

We’re in the process of rolling this out for our customers on http://travis-ci.com as well, stay tuned!

Make sure to read the official press release too!

Thanks Blue Box, you’re awesome!


An Update on Travis Pro

Mathias Meyer

It’s been a while since our last update on Travis CI for private repositories, our hosted product version of Travis CI.

We launched a private beta last July, and not only has a lot of time passed since then, we’ve also made some great progress.

In September we started moving the platform into a paid private beta mode, and I’m happy to report that by now, we have more than 220 paying organizations using the platform. On a daily basis, we’re running close to 6000 builds.

While this may sound insignificant compared to the now more than 21000 builds we run every day on our open source platform, test suites are usually a lot more complex and long-running for private projects and products.

What’s been taking us so long to go public?

To be able to open the platform for everyone, we had several things we wanted to solve first. The good news is, we’re almost there.

We rewrote our build process to be a lot more stable, we’re in the process of moving our build infrastructure away from VirtualBox, and we’ve deployed one of the last outstanding fixes for us to have more reliable and scalable build log processing.

In short, we wanted to have the greatest confidence in our system being able to handle a much larger number of customers more easily than it did before.

We’re slowly starting to move customer projects to our new infrastructure, and once we’re able to fully decommission our VirtualBox setup, it’ll be a lot easier for us to grow our infrastructure with the number of customers coming on board.

So how long until we go public?

Sorry that we’re not in a position to give a specific date yet, but we’re working frantically on opening up the platform for everyone. Lots of folks are waiting on our beta list, and we’re sorry we’ve kept you waiting for so long.

If you signed up for our beta list, you’ll be hearing from us soon!


Client and API isolation

Piotr Sarnacki

The Travis CI web client is a JavaScript application written in Ember.js. What’s really interesting here is that the web application and the API app live in completely separate repositories and run on different subdomains. This kind of isolation is possible because of CORS (Cross-Origin Resource Sharing), which, in simple words, allows you to make Ajax requests to a different domain.

Why?

At first glance, such a setup may seem like a way to complicate things rather than help, but let me explain how it really helps us.

For me, one of the biggest advantages of having more than one app is independent deployments. We deploy some of our applications a lot (even several times a day), but we rarely touch the most important and stable parts of our infrastructure. Similarly, I can deploy the web client several times, and if I break something, the API still works correctly, which is great because the API may be used by other clients as well.

But is it just about deployment? Of course not! It may seem that we need to have two apps running during development: the API and the web client itself. Most of the time this is not the case. As I mentioned above, we use CORS, which allows us to make requests to other domains. When I work on the client, I usually connect it to the production API. Have you ever needed to copy part of the production database, clean out sensitive data and use it in development to catch issues with data you hadn’t thought about? Not anymore! During development we can connect to the staging or production API, or to a local API server.

Recently we wanted to show the changes in the client in some easy way, without the need to deploy to the staging server. The idea was simple:

  • after the build finishes on Travis, upload the assets to the S3 bucket using travis artifacts
  • when a parameter is passed to the web app (like `https://travis-ci.org?alt=my-feature-branch`), start serving assets from S3 for the given branch, and set a cookie so alternative assets are served on subsequent requests too
  • when the default branch name is passed as a param, revert to the regular assets, which are currently deployed
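The branch-switching logic above can be sketched in plain Ruby. This is only an illustration of the idea, not the actual Travis code; the bucket URL, the `alt` param name and the default branch are assumptions:

```ruby
# Hypothetical sketch of the alternative-assets logic described above.
# An explicit ?alt= param wins; otherwise a previously set cookie applies.
DEFAULT_BRANCH = 'master'

# Returns the branch whose assets should be served, or nil for the
# regular, currently deployed assets.
def alt_branch(params, cookies)
  branch = params['alt'] || cookies['alt']
  # Passing the default branch name reverts to the regular assets.
  return nil if branch.nil? || branch == DEFAULT_BRANCH
  branch
end

# Hypothetical S3 location of the assets uploaded after the build.
def asset_host(params, cookies)
  branch = alt_branch(params, cookies)
  branch ? "https://my-bucket.s3.amazonaws.com/#{branch}" : nil
end
```

A request to `?alt=my-feature-branch` would then be served from the S3 copy of that branch’s assets, while `?alt=master` would clear the override.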

Now I can just push my feature branch, wait for the build on Travis to finish and pass the URL to whoever I’d like to show the results to. It’s so much better than just using staging! It behaves more naturally, because it uses production data, and I don’t even need to deploy anything.

You may be wondering whether this is safe. After all, I’m constantly using the client in development against the production API; won’t I break anything? It’s a valid question, and for some applications it could be risky. Imagine that you’re working on an admin application frontend. If you plug it into the production API, you may accidentally fire a request which makes a mess.

However, in our case it’s not a problem. The API barely lets you do anything destructive, and even if it did, in the worst case we would mess up our own accounts.

How?

As I mentioned earlier, we use CORS to make this isolation possible. CORS is fairly simple to set up and use. When you fire an Ajax request to a different domain, the browser should automatically try to use CORS. Before making the actual request, it sends an OPTIONS request (the so-called preflight) to check whether the endpoint accepts CORS.

To see how this works in action with the Travis API, you can try a curl request like this:

curl --verbose --request OPTIONS \
     --header "Accept: application/json; version=2" \
     https://api.travis-ci.org/jobs

The response should look something like:

< HTTP/1.1 200 OK
< Access-Control-Allow-Credentials: true
< Access-Control-Allow-Headers: Content-Type, Authorization, Accept, If-None-Match, If-Modified-Since
< Access-Control-Allow-Methods: HEAD, GET, POST, PATCH, PUT, DELETE
< Access-Control-Allow-Origin: *
< Access-Control-Expose-Headers: Content-Type, Cache-Control, Expires, Etag, Last-Modified
< Content-Type: text/html;charset=utf-8
< Content-Length: 69
< Connection: keep-alive

It basically specifies how the browser should behave when issuing a request to such an endpoint, i.e.:

  • which HTTP methods can it use? (Access-Control-Allow-Methods)
  • which headers can a browser send with the actual request? (Access-Control-Allow-Headers)
  • which headers can it expose to the javascript client? (Access-Control-Expose-Headers)
  • what places can a request come from? (Access-Control-Allow-Origin)
  • can a browser send credentials with Authorization header? (Access-Control-Allow-Credentials)

For more info on CORS, you should check out the [CORS specification](http://www.w3.org/TR/cors/).

Our API is written in Sinatra and CORS support is fairly easy to implement. Of course you may need a bit more code if you want to pass different options for different endpoints.
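For illustration, here is a minimal sketch of what such a setup could look like. The header values mirror the example response shown earlier, but the code itself is an assumption, not the actual Travis API implementation:

```ruby
# Hypothetical sketch of CORS support for a Sinatra-style API.
# Builds the headers that both the preflight (OPTIONS) response and the
# actual response would carry; values mirror the example response above.
def cors_headers
  {
    'Access-Control-Allow-Origin'      => '*',
    'Access-Control-Allow-Credentials' => 'true',
    'Access-Control-Allow-Methods'     => 'HEAD, GET, POST, PATCH, PUT, DELETE',
    'Access-Control-Allow-Headers'     => 'Content-Type, Authorization, Accept, ' \
                                          'If-None-Match, If-Modified-Since',
    'Access-Control-Expose-Headers'    => 'Content-Type, Cache-Control, Expires, ' \
                                          'Etag, Last-Modified'
  }
end

# In Sinatra, wiring this up could look roughly like:
#
#   before  { headers cors_headers }
#   options('*') { halt 200 }
```

A real app would likely vary Access-Control-Allow-Origin and the allowed methods per endpoint, which is where the extra code mentioned above comes in.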

On the client side, we don’t have to do anything; the browser will just do the right thing.

Any caveats?

As you may have noticed, there is one small disadvantage to using CORS: the browser needs to fire an additional OPTIONS request before making the actual one. Handling such a request is really fast, because most of the time you don’t need to do anything more than set up a few headers, but you pay the cost of the extra round trip anyway.

Depending on your situation, this may or may not be a real problem. Thankfully, the web is moving forward: with the growing popularity of SPDY and with the HTTP 2.0 spec in the works, the number of requests should matter less and less.

It’s also good to know that not all browsers support CORS. If you want to use it, you may want to check whether you can ignore the browsers which do not support it.

Summary

API and client isolation is awesome and you should try it!

I would also like to remind all of you reading this post that I work on Travis full time thanks to Engine Yard, who are sponsoring Travis CI and a lot of other OSS projects!