Tredly Command Center

Posted on July 5, 2016 in Blog

How do you make container technology like Tredly even better? You build a great web-based management system for it.

Tredly Command Center was built to allow you to easily manage all the features of Tredly through a clean, easy-to-use interface. It also lets you do advanced things with Tredly that would be very difficult to do manually. Things like:

  1. Viewing the status of all your Hosts, Partitions, Containers and Container Groups
  2. Load balancing Partitions – create a new Partition, select the Partition you want to load balance, and within a few minutes you will have a load-balanced Partition.
  3. Searching for Partitions and Containers across multiple Tredly Hosts.

Tredly Command Center comes installed on every Tredly Host by default. We are also working on a hosted online version to make managing your Tredly Hosts even easier.

Here are some screenshots.





Tredly V1 released

Posted on June 23, 2016 in Blog

After a number of years in development and testing, we are excited to announce that Tredly V1 has been released on GitHub.

The Tredly V1 custom FreeBSD ISO is available here.

We are already working on Tredly V1.1 features and hope to have it in your hands very soon.


How many Containers can you run on Tredly?

Posted on May 13, 2016 in Blog

How many containers can you run on a single Tredly server? We get asked this a lot, and the answer depends on two things:

  1. The hardware resources (CPU, HDD, RAM)
  2. How much of those resources your containers require

Disregarding the above, you can run (using a single public IP):

  1. An unlimited number of containers servicing HTTP/HTTPS connections
  2. Up to 65,000 containers receiving traffic on TCP ports (a single IP has 65,535 ports)
  3. Up to 65,000 containers receiving traffic on UDP ports

Throw in another public IP and you get 65,000 more TCP and UDP ports (130,000 total).
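The arithmetic is straightforward – each public IP gives you a separate 16-bit port space (65,535 ports, rounded to 65,000 above) per transport protocol:

```shell
# Each IPv4 address has a 16-bit port space: 65,535 usable ports
# per transport protocol (TCP and UDP counted separately).
PORTS_PER_IP=65535
IPS=2
echo "TCP ports across $IPS IPs: $((PORTS_PER_IP * IPS))"   # 131070, ~130,000
```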

Try running a couple of thousand old-school virtual machines on a single server.

Containers are fantastic for scale.


Organizing your Containers

Posted on May 10, 2016 in Blog

When you are running several thousand containers, organizing them is pretty important – we created Partitions in Tredly to help you do exactly that.

Now that you have your containers organized, how do you structure them? Since containers are all about giving you choice, you can structure them any way you please BUT how does this translate into the real world?

Tredly has three simple concepts which you can leverage to give you maximum flexibility and scalability:

  1. Partitions
  2. Containers
  3. Container Groups


Partitions

Partitions are great for things like environments – Production, Staging, etc. Partitions also have a heap of other functions you can leverage beyond organizing and structuring your containers – read more about Partitions here.

Containers and Container Groups

This is where we get into the nitty-gritty of structuring your containers. Containers can be run as standalone virtual-machine-like units or grouped together as Container Groups.

On the surface it may seem like Container Groups are unnecessary – and they probably are if you only want the basic benefits of containers. Where Container Groups come into their own:

  1. Services that have no authentication mechanism, but where you want security
  2. Services where you do not want to use an authentication mechanism, but still want security
  3. Services you want to use together, but that have different workload capabilities

Services that have no authentication mechanism, but where you want security:

  1. php-fpm
  2. Queuing software like Gearman or RabbitMQ

Traditionally you would solve the no-authentication issue by installing these services on the same container. With Tredly Container Groups, you group them together and they can only talk to each other = security issue solved.

Services where you do not want to use an authentication mechanism, but still want security:

  1. MySQL
  2. Redis
  3. MariaDB
  4. PostgreSQL

Traditionally you would solve this issue the same way – by installing these services on the same container. With Tredly Container Groups, you group them together and they can only talk to each other = security issue solved.

Services you want to use together but have different workload abilities:

  1. nginx – can handle up to 100,000 connections a second
  2. php-fpm – probably shouldn’t be handling more than 1,000 jobs at once

Traditionally you would solve this issue by installing both services on the same container and hoping that php-fpm didn't crash and bring down nginx as well – it's never a good idea to hope. Using Tredly Container Groups you can configure the services to work together instead. Configure an nginx container to service 100,000 connections a second, then configure a php-fpm container to service up to 1,000 jobs at once. Set replicate=yes in the php-fpm container's Tredlyfile, and Tredly will automatically bring up new php-fpm containers as load increases, load balancing the jobs sent from the nginx container across the php-fpm containers. If your nginx container is getting 100,000 connections a second, Tredly will create enough php-fpm containers to service that load without you having to lift a finger. In doing this, Tredly lets you scale your server infrastructure properly.
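As a rough sketch of what this looks like in practice – only replicate=yes comes from the text above; the other key names below are illustrative assumptions, so check the Tredly documentation for the real Tredlyfile syntax:

```
# Hypothetical Tredlyfile excerpt for the php-fpm container
# (key names other than replicate are illustrative, not confirmed syntax)
containerName=php-fpm
replicate=yes        # Tredly brings up extra copies as load increases
maxCpu=2             # per-container resource caps
maxRam=1G
tcpInPort=9000       # FastCGI port the nginx container connects to
```

The nginx container would get its own Tredlyfile sized for connection handling, while each replicated php-fpm container stays small enough to fail independently.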



Container Danger

Posted on May 9, 2016 in Blog

Because containers are so great for collaboration, developers and sysadmins have taken to them like ducks to water – containers, and the OS to run them, are the final piece in making the development cycle smooth.

The great thing about containers is that they hold the entire blueprint of your App, and the entire thing can fit in 20kb of files – it's very easy to share something that small. When you can keep everything you need to run your App inside a few files, it's much easier to keep credentials, SSL certificates and other sensitive information inside the container as well – you're only publishing it locally anyway.

Where developers/sysadmins come unstuck is when they share their container with the outside world. Their App is finished, so they push it to GitHub to make it available to everyone – forgetting to delete some sensitive information – and then bad things like this or this happen.

These sorts of mistakes are not really the fault of developers – there hasn't been an easy, secure way to create a container without having the required credentials or SSL certificates inside it.

We wanted to solve this issue for developers, so we did. When you create a container on Tredly, you create it inside a Partition. Each Partition in Tredly has an area where you can store files or folders which are accessible only to the containers created within it. When creating your Tredlyfile you can reference the files and folders stored in the Partition, so there is no need to store credentials, SSL certificates or other sensitive information in your Tredly containers.
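As a sketch of the idea – the key names and partition-path convention below are assumptions for illustration, not confirmed Tredlyfile syntax:

```
# Hypothetical Tredlyfile excerpt (illustrative syntax only)
# Reference a certificate stored in the Partition's storage area
# instead of shipping it inside the container:
url1Cert=partition/sslCerts/mysite
# Map a credentials file from the Partition into the container:
fileFolderMapping=partition/credentials/db.conf:/usr/local/etc/db.conf
```

The container you push to GitHub then contains only references, never the secrets themselves.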

Tredly Architecture

Organise your Containers with Partitions

Posted on May 7, 2016 in Blog

A big issue with being able to run thousands of Containers on a single Tredly Host is managing them all. When you are running a thousand Containers, wading through the list trying to find the one you want is very difficult. We needed a way to organize Containers – so we created Partitions.

When you create a Container on Tredly, you can choose to create it in a particular Partition; otherwise it is created inside the Default Partition. This is fantastic for things like Prod and Stage environments.

This solved our organization issue but developing partitions to help us organize containers gave us some more ideas.

What if we extended the Partition to have all the functionality of a virtual machine, without the overhead?

  1. If we implemented CPU, RAM and HDD limits on a Partition, the containers within it couldn't use more than the resources allocated to the Partition. This would let us run containers without worrying about them using more resources than they should.
  2. If we implemented restrictions on which IP addresses could access certain Partitions, we could protect development code from the full scrutiny of the internet.
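In practice, both ideas above boil down to a couple of commands at Partition-creation time. The exact syntax here is an illustrative assumption – consult the Tredly CLI documentation for the real flags:

```
# Assumed tredly CLI syntax, for illustration only
# Create a resource-capped Partition for a staging environment:
tredly create partition staging CPU=2 RAM=4G HDD=20G
# Restrict which source addresses may reach containers in it
# (hypothetical whitelist option):
tredly modify partition staging ipv4Whitelist=203.0.113.0/24
```

Every container created inside "staging" then shares those caps and access rules automatically.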

Trying to make container management easier resulted in some really helpful new functionality.


Containers – modify or recreate

Posted on May 6, 2016 in Blog

When Tredly was first being built, this was one of the first questions posed – should we modify a container or recreate it instead?

At first thought it's easy to go with modify because it's just easier – modifying a container would be faster, there would be no downtime, and it's easy to change a few things. Except our assumptions were wrong.

  1. If you are using a load balancer, you need to update multiple containers at the same time.
  2. Modifying a container might take 5 minutes – recreating a container takes at most 1 minute.
  3. Tredly allows you to recreate containers without any downtime at all – so downtime wasn't an issue.
  4. Modifications are not repeatable – the beauty of containers is repeatability.
  5. You can test your changes by building your updated container in another environment – modifying code or configuration live is bad.

So recreating is better in every way BUT:

There are some use cases where rebuilding doesn’t work:

  1. You have a database container and want to be able to update it without downtime.
  2. You have a container that stores data sent to it over rsync.

Persistent storage is an issue when "rebuild" is the option of choice, and we have yet to solve it for all use cases. For example, if you want no downtime when rebuilding a PostgreSQL container, the old container needs to relinquish the database files and the new container needs to pull them in. If your database is several GB, that cannot happen quickly = downtime. We hope to come up with the magic bullet to solve this soon = fingers crossed we have a Eureka moment.


Containers = repeatability

Posted on May 4, 2016 in Blog

Choice, as I discussed previously, is containers' biggest benefit, but repeatability is another massive one. At Vuid, they use containers for everything, even when it might not immediately make sense to do so. Vuid runs a large farm of database servers to store all their customers' data. Each of these databases is a container, and only one container is run per server. These containers are configured to consume 95% of the resources of the server running them. This probably seems a little counterintuitive – why not simply install the databases directly on the servers? Repeatability. When Vuid needs a new database server, it can create an exact replica of all their other database containers on another server in about 25 seconds – try doing that on bare metal.

Containers = Repeatability


Containers = better collaboration

Posted on May 1, 2016 in Blog

Containers are the best thing since sliced bread for collaboration. Well that doesn’t really make sense but I try to make sure these articles are understandable by Container newcomers.

I think Containers are the best thing for developer collaboration since source control was developed. Having your code, and the recipe for creating your Container, stored in files makes them extremely easy to share. Because containers are stored as plain files they are extremely small – I regularly use a Container that is only 188kb in size – you can download it here. To put that into context, I can store 3,700 of these containers on a single CD.

If you are a developer and you are not using containers, you should. If you don’t want to take my word for it, keep on reading, you will see the light pretty soon.


Containers != microservices

Posted on May 1, 2016 in Blog

I have never really understood why container virtualization is touted as being about microservices – microservices is a design philosophy that works well with containers; microservices != containers. Containers are better at providing microservices because container virtualization has almost no overhead compared to old-school virtualization.

Because containers have almost no overhead, you can configure them any way you please without having to worry about wasted resources (overhead). Some good examples:

  1. One nginx container configured to service 100,000 connections a second, and one hundred php-fpm containers each built to service 1,000 connections. Configuring things this way lets you leverage the upper limits of what nginx and php-fpm are capable of. This is what Tredly Container Groups are for.
  2. Two hundred individual App containers each configured to service 1,000 connections (200,000 connections total), instead of four or five virtual machines built to service the same number of connections.
  3. One database container on a Tredly Host configured to use 95% of its resources.

You can see from the examples above that containers are not about microservices – containers are about choice. Because containers have very little overhead, you have the choice of running 1 container to service 100,000 connections a second, 100 containers to service 1,000 connections a second each, or anything in between.

Containers = choice – the choice to use containers in any combination you wish, so your code can do exactly what it was designed to do and do it well.