Multiple Nodes
In the core-server documentation you will find references to multiple servers more than once.
One of our biggest focuses is being able to serve a dynamic app to thousands of users per hour.

When deploying an app to multiple servers, we have to know how the sessions will be managed and how resources like images will be shared.
If this is not managed correctly, you will run into a lot of problems on a large-scale app.

Core-Server has been designed to manage this aspect of scaling and lets you scale your app to as many servers as you want without any additional modifications.
Every node is an exact replica of the others and shares the needed information with them. In this section we will explain how you can manage the different aspects of your app when scaling.
Session Management
When you scale your application to multiple nodes, the session data has to be shared between the nodes.
If the session data is not shared correctly between them, users will be disconnected whenever they are load balanced to another node.

We have a deep integration of Redis in our session system.
If you have configured Redis correctly as described in the configuration section, all sessions will automatically be shared between the different nodes through Redis.
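As an illustration, here is a minimal sketch of what shared sessions come down to, written against the ioredis client; the key prefix, the TTL and the Session shape are assumptions for this example, not core-server's actual internals:

```typescript
import Redis from "ioredis";

// Hypothetical session shape; core-server's real session object may differ.
interface Session {
  userId: string;
  createdAt: number;
}

// Every node connects to the same Redis instance (or cluster), so a
// session written by one node is immediately visible to all others.
const redis = new Redis({ host: "redis.internal", port: 6379 });

async function saveSession(token: string, session: Session): Promise<void> {
  // Assumed TTL: expire sessions after one hour.
  await redis.set(`session:${token}`, JSON.stringify(session), "EX", 3600);
}

async function loadSession(token: string): Promise<Session | null> {
  const raw = await redis.get(`session:${token}`);
  return raw ? (JSON.parse(raw) as Session) : null;
}
```

Because the session store lives outside the individual nodes, it makes no difference which node the load balancer picks for the next request.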
Uploaded File Sharing
In some scenarios you will need to let your users upload files such as images.
These files need to be shared between every node of your cluster. This can be done using NFS storage or an even better solution like GlusterFS. When you mount a network location to a folder, every core-server node will read the files from this storage and keep them in cache in order to perform as few read operations as possible.
When a file is updated on this storage, you can purge the cache of the nodes as explained in the advanced API section. This way core-server will load the latest uploaded image.
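The mechanism boils down to a read-through cache in front of the mount. The sketch below assumes the network storage is mounted at /mnt/shared; the mount path and function names are illustrative, not core-server's API:

```typescript
import { promises as fs } from "fs";
import * as path from "path";

const MOUNT_POINT = "/mnt/shared"; // assumed NFS/GlusterFS mount path
const cache = new Map<string, Buffer>();

// Read a file from the shared storage, hitting the disk only once.
async function readShared(relativePath: string): Promise<Buffer> {
  const cached = cache.get(relativePath);
  if (cached) return cached;

  const data = await fs.readFile(path.join(MOUNT_POINT, relativePath));
  cache.set(relativePath, data);
  return data;
}

// Called when a file changes on the storage (e.g. through the purge
// API), so that the next read picks up the latest version.
function purgeCache(relativePath: string): void {
  cache.delete(relativePath);
}
```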

If you're using Kubernetes or another Docker clustering system, we recommend creating a separate deployment that handles the dynamic files.
If you have a lot of dynamic files, the core-server caches will grow very quickly and require a lot of memory.
When you use a separate deployment for your dynamic files, fewer nodes need to keep these files in cache.
Network Tasks
When a single node has a lot of work to execute, it is sometimes better to dispatch that work to different workers.
For example, you can let the master send tasks to each node every x seconds in order to run a large number of analyses.
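To make this concrete, here is a sketch of a master process dispatching tasks round-robin over HTTP; the worker addresses, the /task route and the interval are placeholders, not the network plugin's real API:

```typescript
// Hypothetical worker endpoints; in a real cluster these would be
// discovered through the network plugin rather than hard-coded.
const workers = ["http://node-1:8080", "http://node-2:8080"];
let next = 0;

// Send a task to the next worker in round-robin order.
async function dispatch(task: { name: string; payload: unknown }): Promise<void> {
  const worker = workers[next];
  next = (next + 1) % workers.length;

  // Node 18+ ships a global fetch; the /task route is an assumption.
  await fetch(`${worker}/task`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(task),
  });
}

// Every 30 seconds the master hands the next analysis off to a worker.
setInterval(() => {
  dispatch({ name: "analyse-logs", payload: { since: Date.now() - 30_000 } })
    .catch((err) => console.error("dispatch failed", err));
}, 30_000);
```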
You can see more advanced use cases in the network plugin section.
Kubernetes Autoscaling
With the Kubernetes autoscaling feature you will be able to handle as many requests as your infrastructure allows.
Be careful: every service involved in handling the incoming requests needs to be scaled automatically in order to avoid any problems.
If, for example, your database cannot handle every query sent to it, core-server will no longer be able to handle the incoming requests either.
Not only your deployments need to be scaled automatically, but also your nodes.
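For the deployments, this is typically done with a HorizontalPodAutoscaler. The manifest below is a sketch: the deployment name core-server, the replica bounds and the CPU target are assumptions you should adapt to your cluster:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: core-server
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: core-server   # assumed deployment name
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # scale out above 70% average CPU
```

Scaling the nodes themselves is usually handled by your cloud provider's cluster autoscaler.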
You will learn more about clustering in the next section.
CDN
Core-Server has also been designed to work in harmony with any CDN, such as Cloudflare.
The main idea of our UI system is to have something that can be cached up to 99%, which dramatically decreases server costs.
Our caching rule at Cloudflare is "Cache everything except APIs"; this way Cloudflare has delivered up to 1TB of data from its cache for us.
When you want to display an image that can be updated, like profile pictures, it is a good idea to add the hash of the image to the image URL.
In the APIs you can use the resource_url function to add the hash of the image to the requested path.
This way your CDN will load the latest version of your image when it has been changed. The hash of the image is calculated only once and can be purged as explained in the advanced API section.
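The idea behind resource_url looks roughly like this; the sketch below uses a content hash as a query parameter and illustrates the technique, not core-server's actual implementation:

```typescript
import { createHash } from "crypto";
import { readFileSync } from "fs";

// Cache the hash per file so it is only calculated once; purge an
// entry when the underlying file changes.
const hashCache = new Map<string, string>();

function resourceUrl(filePath: string, publicPath: string): string {
  let hash = hashCache.get(filePath);
  if (!hash) {
    hash = createHash("md5").update(readFileSync(filePath)).digest("hex");
    hashCache.set(filePath, hash);
  }
  // The ?v= parameter changes together with the content, so the CDN
  // treats an updated image as a brand-new URL and fetches it again.
  return `${publicPath}?v=${hash}`;
}

// e.g. resourceUrl("/mnt/shared/avatars/42.png", "/avatars/42.png")
//      -> "/avatars/42.png?v=3f2c9a..."
```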
Load Balancer
Once your services have been scaled to multiple nodes or pods, you will need to load balance the traffic between the different processes.
With a good load balancing setup you will be able to offer a near-100% availability SLA to any customer.

Load balancing can be done at multiple levels, such as DNS or the HTTP requests entering your cluster.
Internally, core-server will automatically load balance between the different databases if you are using Percona.

We recommend setting up multiple clusters in multiple regions. When one cluster goes down, it will not be a problem because the other clusters are still running.
Cloudflare can load balance at the DNS level between your different clusters. The incoming traffic can then be load balanced between the different processes.

Next steps
Scaling to multiple nodes is now no mystery to you anymore.
The next step in the guide will show the use of resources, whether static or uploaded files.