Jira, Atlassian’s development tool for Agile teams, is available in the Cloud as well as in two on-prem versions: Jira Server and Jira Data Center. Jira Server is just that—one server that does everything. Jira Data Center is a multi-server solution designed for both high availability and easy scalability. With multiple Data Center machines running, you don’t have to worry about the entire system crashing if one machine goes down. And if you have too many users for your server configuration, with Jira Data Center you simply add more servers until you have enough capacity for your user load.
Things to think through before setting up Jira Data Center
When designing a Jira Data Center environment there are a number of factors that must be considered. Most relate to one or more of the following: Jira Data Center’s specific requirements; your team’s technical expertise; your available server and storage resources; and service level agreements that you have with your customers.
Here’s what you need to consider:
- Servers – Will you be using physical servers, virtual servers such as VMware, or a cloud-based service such as Amazon Web Services (AWS)? Your answer to this question will have a significant impact on your Jira Data Center set-up. An important thing to keep in mind here is that Jira Data Center licensing is based on the number of users, not the number of application servers being used. This encourages the use of a larger number of small application servers rather than a small number of large ones, which in itself is an argument against using physical hardware.
- With physical servers, other than the fact that you’re ignoring Atlassian’s efforts to get you to use a large number of small application servers, the considerations are primarily around cost issues. What will it cost to buy the server, what will the utilization rate of that piece of hardware be, do you have the necessary space available in the data center, how much power will the server consume, etc. Going the physical server route does offer you the greatest performance potential, though.
- In virtual environments all of the server and storage resources are shared, which means that despite VMware’s best efforts, it’s possible for one system running in the environment to negatively impact others. If you’re going to be using virtual servers, it’s therefore important to have a good understanding of your storage, memory, and CPU utilization levels and patterns, and to plan around them. I also recommend putting some thought into your anti-affinity strategy to ensure that a single physical server failure cannot take down more than one component of your Jira Data Center environment.
- If you’re using AWS, Atlassian has what’s called “Quick Start Configurations” for Jira Data Center that make set-up a snap. Just click a button, select the number of servers you’ll be using, answer a few questions about the specifics of your configurations, and you’re done. This can be a major plus. However, cloud services can be rather expensive to operate for 24×7 services of this type.
- API users – If you have a lot of API consumers or users, best practice is to designate one or more of the application servers to exclusively handle your API traffic. Doing so segregates API traffic from the servers that humans use, reducing the chance that spikes in API traffic will negatively impact your human users. Why? Because one of the big problems with the REST API is that Jira does not natively include a throttle. This makes it easy for a small number of API scripts to cause real problems, slowing a server down to the point where people find their applications unusable. In addition to fencing API traffic off onto dedicated nodes, I also recommend implementing sensible rate limiting on your proxy server or load balancer, to ensure that a rogue script cannot cause an inadvertent denial of service.
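The two recommendations above (dedicated API nodes plus rate limiting at the proxy) can be sketched at the reverse-proxy layer. The nginx fragment below is only a sketch: the zone name, rates, hostname, and upstream addresses are placeholder assumptions, not values from Atlassian's documentation, and the limits will need tuning against your actual API traffic.

```nginx
# Sketch only: per-client API rate limiting at an nginx reverse proxy in
# front of Jira Data Center. All names and addresses are placeholders.

# Upstream pools: nodes dedicated to API traffic vs. nodes for browser users.
upstream jira_api_nodes  { server 10.0.1.10:8080; }   # placeholder address
upstream jira_user_nodes { server 10.0.1.20:8080; }   # placeholder address

# Allow each client IP roughly 20 requests/second against the REST API.
limit_req_zone $binary_remote_addr zone=jira_api:10m rate=20r/s;

server {
    listen 80;
    server_name jira.example.com;          # placeholder hostname

    # Jira's REST API lives under /rest/, so throttle just that prefix
    # and route it to the API-only nodes.
    location /rest/ {
        limit_req zone=jira_api burst=40 nodelay;   # absorb short bursts
        proxy_pass http://jira_api_nodes;
    }

    # Everything else goes to the nodes serving human users.
    location / {
        proxy_pass http://jira_user_nodes;
    }
}
```

A rogue script that exceeds the limit receives HTTP errors instead of degrading the servers your human users depend on.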
- Storage – Jira Data Center requires shared storage: a storage device that can present a single volume to all application servers, where resources such as attachments are kept. Throughput is an important consideration, but it doesn’t tell the whole story; you also need to understand the latency of various types of storage operations. I recommend using a storage benchmarking tool to measure the overall performance of your shared storage rather than performing a throughput test alone. Especially if you’re using NFS storage, it’s advisable to do some testing to determine exactly what your storage latency is.
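To make the latency point concrete, a rough probe can be run with standard tools. This is only a sketch assuming a GNU/Linux host (oflag=dsync is GNU dd); the target path is a placeholder you would point at your shared mount, and a proper benchmarking tool such as fio is the better choice for serious measurement.

```shell
# Rough latency probe for shared storage (sketch; path is a placeholder
# for your actual NFS/shared volume). Forcing a synchronous flush on every
# 4 KB write (oflag=dsync) surfaces per-operation latency that a single
# large-block throughput test would hide.
TARGET=./latency-test.bin          # point this at your shared storage mount
dd if=/dev/zero of="$TARGET" bs=4k count=1000 oflag=dsync
# dd's final status line reports elapsed time; dividing it by the 1000
# writes gives a rough average per-write latency for the volume.
```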
- Service level agreements – Your service level agreements (SLAs) with your customers will drive the decisions you make when designing the database and load balancer environments. For example, if your SLA allows for a 1-hour outage, keep things simple and use master/slave replication on your database. But if your SLA requires 99.9% uptime, you’ll need to make both the database and the load balancer much more robust, using active/active redundancy, which adds complexity to the environment. While Atlassian offers documentation and support for making the Jira Data Center application itself highly available, they don’t offer much guidance on making your database and load balancer highly available. This is an area where working with an expert such as Coyote Creek can be a big advantage. We have proven designs using a variety of technologies to suit your requirements.
- Database – There are four database technologies supported in Jira Data Center, and you must run one of them. If your team has expertise on one of these supported platforms, that one is your best choice; all of the supported database technologies can be clustered to provide high availability. If you’re using AWS, the decision is already made for you: Atlassian supports Amazon RDS (Amazon’s managed database service, typically running PostgreSQL), and the database is set up for you when you use the Quick Start configuration. Another advantage of the Quick Start is that auto-scaling is configured automatically and your nodes are distributed across two of Amazon’s Availability Zones.
- Load balancers – Jira Data Center requires at least one load balancer, the “traffic cop” that looks at incoming requests and directs them to the application servers. Because most organizations choose Jira Data Center over Jira Server to meet high availability requirements, I recommend always using more than one load balancer, for redundancy. It is important to configure your load balancer to be session-sticky. Most load balancers can do this in several ways, and I recommend sticking based on cookies if you have that option. Finally, you need to choose your balancing algorithm. You typically choose between “round robin,” where the load balancer assigns each incoming session to the next application server in rotation, and “least loaded,” where it assigns the incoming session to the application server with the fewest active sessions. Neither model is perfect, but in testing we have found that the best overall balancing occurs with “least loaded.” If you are using a more advanced hardware load balancer, it may also be possible to balance based on server CPU load or some other attribute that measures server performance. Once again, if you’re using AWS, your decision is made for you: the Quick Start uses the native AWS Elastic Load Balancer, or ELB.
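As a concrete illustration of cookie-based stickiness combined with a “least loaded” algorithm, here is a minimal HAProxy sketch. The node names, addresses, and cookie name are assumptions for illustration only, not values from any Jira documentation.

```haproxy
# Sketch only: cookie-based session stickiness plus least-connections
# balancing in HAProxy. Names and addresses are placeholders.
frontend jira_front
    bind *:80
    default_backend jira_nodes

backend jira_nodes
    balance leastconn                         # the "least loaded" strategy
    cookie JIRA_NODE insert indirect nocache  # pin each session to one node
    server node1 10.0.1.10:8080 cookie n1 check
    server node2 10.0.1.11:8080 cookie n2 check
```

Here HAProxy inserts a JIRA_NODE cookie on the first response, so every later request in that session returns to the same application server, while new sessions go to whichever node currently has the fewest active connections.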
Need help with the architecture and installation of Jira Data Center? Give Coyote Creek a call. As Atlassian Platinum Experts, we have a great deal of experience in this area.