13 Expert Tips to Improve Your Web Application Performance Today
You've spent your budget on social media, PR, and preparations for what is set to be the biggest campaign of the year... just to see your website crumble under the stress of thousands of concurrent users.
They try to access your site but run into long delays and dreaded HTTP error codes.
Sound familiar? If that hasn't happened to your own site or app, you've probably read plenty of examples in the media.
It's a terrible feeling to be a victim of your own success, to see your infrastructure fail just when you are about to reap the rewards.
The more load you put on a system, the more likely it is to fail. It's just a fact of computing. At some point, you may need to add servers, overhaul your application architecture, or move to a new, easier-to-scale application infrastructure.
That said, there are steps you can take now to improve the performance of your existing setup. Some you may already have in place, but others may have been overlooked.
At Queue-it, we've seen our fair share of websites that couldn't take the pressure of thousands of simultaneous visitors. Some of these websites were poorly designed and poorly built. Others, after spending millions on performance, still couldn't keep up with demand.
Here are 13 tips based on the years of experience of Queue-it product manager Martin Larsen, a web performance expert.
1. Use (& optimize) a content delivery network (CDN)
In 2021, using a CDN should be a given. If you don't have a content delivery network (CDN) in front of your site, this is the obvious first step.
Moving all of your static resources to a CDN is by far the easiest way to take pressure off your web servers.
A CDN improves performance by caching (or saving) static content such as images at the CDN level, so these assets are loaded from your origin only once, then served to hundreds or thousands of visitors. Serving static files from the CDN instead of your web server frees up the clock cycles, bandwidth, and threads for what your web server should be doing: serving dynamic content.
Caching is one of the most efficient and inexpensive ways to improve scalability and performance. It can be implemented on your infrastructure, in your application or at the data level. Each has its advantages, but the infrastructure level is probably where you will see the greatest rewards for the least effort.
Depending on your configuration, you may load CDN content from another domain (e.g. assets.website.com), which requires configuration in your application. But nowadays CDNs let you put the CDN in front of your entire website and then configure rules to determine what should be loaded from the CDN and what should be loaded from your web servers.
Originally, CDNs simply helped serve static content. But today they do much more, from dynamic content caching and routing to DDoS protection. Some, like Akamai EdgeWorkers or Cloudflare Workers, even let you run code at the edge.
Overall this means that it is possible to offload more and more of your website onto the infrastructure of CDN providers, which reduces the pressure on your servers.
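To make the caching idea concrete, here is a minimal sketch of the policy a CDN relies on: static assets get long-lived Cache-Control headers so edge servers can serve them without touching the origin, while dynamic responses opt out. The extension-to-policy mapping is purely illustrative, not a standard.

```python
# Illustrative mapping: which paths count as static, cacheable assets.
STATIC_EXTENSIONS = {".css", ".js", ".png", ".jpg", ".svg", ".woff2"}

def cache_headers(path: str) -> dict:
    """Return HTTP caching headers for a requested path."""
    ext = "." + path.rsplit(".", 1)[-1] if "." in path else ""
    if ext in STATIC_EXTENSIONS:
        # Cacheable for a year at the edge; 'immutable' tells browsers
        # not to revalidate fingerprinted assets.
        return {"Cache-Control": "public, max-age=31536000, immutable"}
    # Dynamic content: always go back to the origin server.
    return {"Cache-Control": "no-store"}
```

In practice you would configure rules like these in the CDN itself or in your web server, rather than in application code, but the split between "cache at the edge" and "always hit the origin" is the same.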
2. Apply the right tool to the job
Contrary to popular belief of recent decades, the relational database is not a Swiss Army knife.
The high cost of data storage and software licenses has led developers and architects to abuse the relational database, using it to store images or temporary data like session state.
Relational databases are the most common reason we see websites fail to scale. Few developers or architects seem to realize that there are other ways to store your data: NoSQL databases, blob storage, message queues, push notifications, and browser storage, to name a few.
A relational database is a powerful tool. But it is just one of many.
To achieve a scalable system, you have to use many tools, not just one. You will end up with a set of tools, and your data can be distributed and replicated among them.
We realize this seems a little scary.
You will need broad knowledge and experience of different tools and technologies to choose the right ones. Your application will need to support the ones you have chosen, and you will need to run them in a production environment.
But it looks scarier than it is. Scaling an overloaded relational database is harder.
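As a small illustration of moving temporary data out of the relational database, here is a sketch of a session store backed by a key-value structure with expiry. A plain dict stands in for something like Redis or Memcached; the API shape is an assumption for illustration.

```python
import time

class SessionStore:
    """In-memory key-value store with TTL, standing in for Redis/Memcached."""

    def __init__(self, ttl_seconds: int = 1800):
        self.ttl = ttl_seconds
        self._data = {}  # session_id -> (expires_at, payload)

    def set(self, session_id: str, payload: dict) -> None:
        self._data[session_id] = (time.time() + self.ttl, payload)

    def get(self, session_id: str):
        entry = self._data.get(session_id)
        if entry is None:
            return None
        expires_at, payload = entry
        if time.time() > expires_at:
            del self._data[session_id]  # expired, drop it
            return None
        return payload
```

Session state is exactly the kind of transient, non-relational data that has no business occupying transactions and disk I/O on your primary database.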
3. Build elasticity into the application
Web performance isn't just about the hardware, the tools, and the algorithms. It's also about how you choose to design and build your app.
When it comes to elasticity in scaling your servers, horizontal scaling is generally preferred over vertical scaling. This means that as you scale, you should prioritize adding more small servers rather than replacing servers with larger ones.
This is vital if your application is running in the cloud. When you scale horizontally, your application may need to support it, e.g. by making it easy to bootstrap new servers and by serving the same user from multiple servers, to name a few requirements. Even with automatic scaling, scaling servers is complex.
Load balancing allows you to optimize your server resources by distributing requests over several servers. A cloud-based load balancer makes the decision at the edge of the network, closer to the users, allowing you to improve response time and use your infrastructure efficiently while minimizing the risk of server failure. Even if a single server fails, the load balancer can redirect and redistribute traffic among the remaining servers, ensuring that clients do not experience high latency or see a site outage.
With horizontal scaling and load balancing, you get an application that is easy and inexpensive to scale.
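The failover behavior described above can be sketched in a few lines: requests rotate round-robin across a pool, and servers marked unhealthy are skipped so a single failure does not take the site down. This is a toy model of what a real load balancer does; the class and method names are illustrative.

```python
import itertools

class RoundRobinBalancer:
    """Toy round-robin load balancer that skips unhealthy servers."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(self.servers)
        self._cycle = itertools.cycle(self.servers)

    def mark_down(self, server):
        # A health check would normally call this automatically.
        self.healthy.discard(server)

    def next_server(self):
        # Try at most one full rotation before giving up.
        for _ in range(len(self.servers)):
            server = next(self._cycle)
            if server in self.healthy:
                return server
        raise RuntimeError("no healthy servers available")
```

Real load balancers add health checks, connection draining, and smarter algorithms (least-connections, latency-aware), but the core idea is this simple rotation with failover.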
To highlight the savings, think of two cars: one a flashy sports car, the other a modest sedan. Let's say the Ferrari 812 Superfast and the Ford Focus.
It is much easier to buy a Ford Focus than a Ferrari. The supply is high, the delivery time is short, and if you are based in the United States, it does not need to be imported from overseas.
In addition, it is easier to repair (almost any auto shop can fix it), and it can easily be replaced, even with another model or another brand with similar specifications. And it's cheaper.
Similarly, scaling horizontally with many smaller servers means more flexibility and lower costs, in terms of purchase, repair, and replacement alike.
4. Run code on the edge
Many of these tips involve ways to reduce the load on your servers, and this one is no different. Serverless technology that lets you run code at the edge of the network is an effective way to take load off your web servers.
Edge computing leverages the many edge servers in a CDN. These servers are spread around the world, originally with the goal of reducing latency and bandwidth usage.
The genius of edge computing lies in repurposing this network of servers as a platform on which you can run your code, without necessarily involving your origin web servers.
So what kind of assets or code do you want to run on the edge? Ideally, you'll want to move things that don't require state. In other words, code that doesn't need to call back to your web server to run correctly.
To take an e-commerce example, let's say that every time visitors log into their account on your site, you display a personalized list of items that might interest them. To show it, you need access to a snapshot of their purchase history, maybe refreshed once a day. That's it. It doesn't need to be updated in real time.
But creating the list still requires CPU power and an algorithm. It would be a great candidate for offloading to run at the edge.
The more you can minimize requests to your web servers, the more you free up those resources for the critical tasks you really need them for, and the better performance you can get out of your web application.
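The recommendation example above can be sketched as a daily-refreshed cache: the list is computed at most once per day per user, and repeat requests never touch the origin. In a real deployment the cache would live at the CDN edge (e.g. inside a Worker); here a dict and a stand-in algorithm are used for illustration.

```python
import time

CACHE_TTL = 24 * 60 * 60  # refresh at most once a day
_cache = {}  # user_id -> (computed_at, recommendations)

def compute_recommendations(user_id, purchase_history):
    # Stand-in for the expensive algorithm run on the origin.
    return sorted(set(purchase_history))[:5]

def get_recommendations(user_id, purchase_history, now=None):
    now = time.time() if now is None else now
    cached = _cache.get(user_id)
    if cached and now - cached[0] < CACHE_TTL:
        return cached[1]  # served "from the edge": no origin work
    recs = compute_recommendations(user_id, purchase_history)
    _cache[user_id] = (now, recs)
    return recs
```

Note that within the TTL the cached list is returned even if the inputs change, which is exactly the trade-off the text describes: the data does not need to be real-time.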
5. Toggle functionality
Despite your efforts to maximize performance, you will inevitably find yourself in situations where resources run short. When that happens, it is better to have a responsive, basic website than a feature-rich one that is unavailable.
You've probably spent years tinkering with and customizing your site to make it just the way you want it. There's always something to optimize, and you've added some cute bells and whistles along the way. These can be performance intensive, and they add up.
Think about your search function. When a person searches for a product, an advanced search function should find all items that match the search term(s), taking into account spelling errors, product categories, features, and so on. That is very CPU intensive.
Build your application with ops toggles so that performance-intensive features can be disabled when resources are low. Prime candidates are fancy nice-to-have features like advanced search functionality or "Recommended for you" panels.
Yes, the user experience will be degraded. But even giants like Amazon do it, as when the site was overwhelmed with demand on Prime Day in 2018 and the company implemented a reduced "fallback" homepage. If Amazon uses feature failover, you can too.
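A minimal sketch of such an ops toggle, assuming a CPU-utilization signal is available: expensive features are switched off whenever the system is under pressure. The feature names and threshold are illustrative.

```python
class FeatureToggles:
    """Disable performance-intensive features when the system is loaded."""

    def __init__(self, cpu_threshold: float = 0.85):
        self.cpu_threshold = cpu_threshold
        # Illustrative set of "nice-to-have" features worth shedding first.
        self.expensive_features = {"advanced_search", "recommendations"}

    def is_enabled(self, feature: str, current_cpu: float) -> bool:
        if feature in self.expensive_features and current_cpu >= self.cpu_threshold:
            return False  # shed load: degrade gracefully instead of going down
        return True
```

In production the toggle state would usually live in a config service so operators can flip it without a deploy, but the decision logic stays this simple.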
6. Shard your resources
Sharding is the process of splitting a system into multiple identical partitions, each serving only a subset of your users or data.
A common example is to partition by geography. If you serve users worldwide, you can configure three identical systems to serve users in America, Europe, and Asia/Pacific rather than having one system serve them all. Each of the three systems is easier to scale than a single huge system. And if one system is down or unreliable, it will not affect the other two. Assuming the load is evenly distributed, the result is that your overall reliability improves by at least a factor of three.
Depending on your application, another approach may be to shard on a key data attribute, like a user ID, or an event ID if you are selling something like tickets. This way, you can distribute the load more evenly across your partitions. You can even take advantage of advanced techniques like shuffle sharding to further increase scalability and reliability.
If your bottleneck is in the data layer, you can apply the same techniques to databases, so that the data resides in different tables on different physical database servers. Distributing data in this way means that a shortage of resources on one database server will not affect users served by the other database servers.
There are several data models to achieve this, so you can choose the method that best suits your application's needs.
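Key-based sharding can be sketched in a few lines: a stable hash of the user ID picks the partition, so the same user always lands on the same shard and load spreads roughly evenly. The shard names are illustrative.

```python
import hashlib

SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2"]

def shard_for(user_id: str) -> str:
    """Map a user ID to a shard deterministically."""
    # Use a stable hash (not Python's randomized built-in hash()) so the
    # mapping survives process restarts and is the same on every server.
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]
```

One caveat worth knowing: with plain modulo hashing, changing the number of shards remaps most keys, which is why production systems often use consistent hashing instead.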
7. Minimize latency
Many web servers are limited by the number of concurrent threads. Long-running requests reduce the number of requests the web server can handle in a given time, and performance collapses when the web server starts queuing requests because the thread pool is exhausted.
Be sure to monitor request latency and take corrective measures to reduce it. High latency is typically caused by blocking code in your application, such as database queries, transactions, poorly written algorithms, and network latency. Latency is likely to increase over time as more data accumulates and the number of concurrent requests grows.
Modern programming languages allow the application to reuse threads while they wait for I/O. Using this asynchronous programming model is essential where the application makes requests to dependencies such as databases.
Use caching whenever possible. In many cases, application logic does not need fully up-to-date data, and in those cases latency can be drastically reduced by adding a cache layer.
Make sure you only create transactions and use locking when you absolutely must. In most cases there is a way to avoid them, and doing so will significantly reduce latency.
Finally, you can swap the underlying tool for one with a faster response time. If you are working with transient data, use an in-memory database rather than a SQL database.
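The cache-layer idea above can be sketched as a small TTL-cache decorator: repeated calls within the TTL are served from memory instead of re-running the blocking lookup. The decorated function here is a stand-in for a slow database query.

```python
import time
from functools import wraps

def ttl_cache(ttl_seconds):
    """Cache a function's results for ttl_seconds per argument tuple."""
    def decorator(fn):
        store = {}  # args -> (cached_at, result)
        @wraps(fn)
        def wrapper(*args):
            entry = store.get(args)
            if entry and time.time() - entry[0] < ttl_seconds:
                return entry[1]  # cache hit: no blocking call
            result = fn(*args)
            store[args] = (time.time(), result)
            return result
        return wrapper
    return decorator

@ttl_cache(ttl_seconds=60)
def product_price(product_id):
    # Imagine a slow, blocking database query here.
    return {"sku-1": 999, "sku-2": 1299}.get(product_id)
```

This trades a little staleness (up to 60 seconds here) for a large latency win on every cache hit, which is exactly the bargain the text describes.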
8. Learn to use a performance profiler
Finding a performance bottleneck is a bit like finding a needle in a haystack. Yet most developers will start looking at their code to find it.
What they should do is look at the data.
Most applications are black boxes. You put something in, you wait, and you get a result out. When apps start to malfunction, it's hard to see why.
Is it a lack of threads? An unoptimized SQL query? The exponential complexity of inefficient code and algorithms? Network latency? The possible causes are almost endless, so where do you start?
A performance profiler will collect data and visualize it in a way that makes it easier to locate bottlenecks. You can usually drill down into CPU and memory usage, and the profiler will point you to the exact piece of code causing the problem.
Application Performance Monitoring (APM) is becoming increasingly important with modern distributed applications and microservices architectures. APM tools let you explore the relationships and latency between services and datastores, allowing you to identify exactly which component is causing the problem and why.
There is a bit of a learning curve, and setup takes a while, but the investment will pay for itself many times over as performance issues are resolved.
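As one concrete example of profiling, here is a minimal run of Python's built-in cProfile: a suspect function is executed under the profiler and the report shows where the time goes, sorted by cumulative time. The function itself is a toy stand-in for your hot path.

```python
import cProfile
import io
import pstats

def slow_function():
    # Stand-in for the code you suspect of burning CPU.
    total = 0
    for i in range(100_000):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_function()
profiler.disable()

# Render the profile, sorted by cumulative time, into a string.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)  # top 5 entries
report = stream.getvalue()
```

Graphical profilers and APM tools present the same underlying data far more readably, but even this built-in report names the exact function consuming the time.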
9. Run load tests
Even with great technical architecture, brilliant developers, and great infrastructure, your application will always have limits. You probably have some type of distributed system, and networks, latency, multithreading, and so on introduce a series of new error sources that will limit the scalability of your application.
It is essential that you are aware of these limitations and that you prepare for them before the errors occur in production.
Yet when we ask customers how many visitors their website can handle, they usually have no idea. Or they expect to be able to handle far more than they actually can. Simply trying to answer the question often reveals that they are not performing any kind of load test.
In recent years it has become easy and inexpensive to run load tests with open-source tools like Apache JMeter and Gatling, especially when they are run in the cloud using a service like RedLine13.
If you are planning a high-traffic sale or registration event, it is essential to start the load testing process early. It takes a lot of planning. You also don't know what you are going to find, which means you don't know how long it will take to implement changes.
All of this can create tension. On the one hand, you need to test as soon as possible. On the other hand, there is no point in testing something that looks nothing like the finished article. So, you might find that you need to prepare your landing pages and key journeys much sooner than you expect.
Also remember that new code can potentially introduce new limitations, so it's important to run your load tests regularly.
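To give a feel for what a load test measures, here is a toy sketch: fire a batch of concurrent requests and report throughput and worst-case latency. A local function with a fixed sleep stands in for the real HTTP call; an actual test would drive your site with a tool like JMeter or Gatling.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stand-in for one HTTP request; returns its latency in seconds."""
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for network + server processing time
    return time.perf_counter() - start

def run_load_test(concurrency=20, total_requests=100):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(lambda _: fake_request(), range(total_requests)))
    return {
        "requests": len(latencies),
        "max_latency_s": max(latencies),
        "avg_latency_s": sum(latencies) / len(latencies),
    }
```

The interesting part in a real test is watching how `max_latency_s` degrades as you raise `concurrency` past what the system can absorb; that knee in the curve is the limit you need to know before launch day.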
10. Keep visitor levels where your site or application performs best
You know the level of performance you expect to deliver to your visitors. And if you've done your load tests, you know what kind of traffic your site or your app can handle.
But if you receive a surge of website or application visitors beyond that, you are unlikely to be able to deliver the performance your customers expect. As the saying goes, "at scale, everything breaks".
Websites and applications are built with assumptions about the traffic they normally handle. Making a site scalable on demand is technically difficult and can be costly. Every website has limits, and as anyone who reads the news knows, even the world's largest companies fail under heavy load.
To keep visitor levels where your website and application perform best, you need to manage the influx of traffic.
Some web traffic management strategies include rate limiting, coordinating marketing campaigns, and using a virtual waiting room.
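One of those strategies, rate limiting, is often implemented as a token bucket: each request spends a token, tokens refill at a fixed rate, and requests beyond the rate are rejected (or, in a waiting-room setup, queued). Here is a minimal sketch; the parameters are illustrative.

```python
import time

class TokenBucket:
    """Allow `rate_per_sec` requests per second with bursts up to `burst`."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: reject, or redirect to a waiting room
```

The `burst` parameter is what distinguishes this from a hard cap: short spikes are absorbed, while sustained traffic is held to the configured rate.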
11. Remember that the browser is a powerful tool
Your server capacity is usually quite limited, either by physical hardware or by cost. In contrast, the compute power available in end-user browsers grows with the number of users.
As we rethink the way we build our websites, our servers are freed up to focus on data access and security, while HTML templating and some data access can be handled by the browser, caches, and CDN servers.
12. Rethink business rules
In many situations, your performance bottlenecks are complex business rules that require you to create complex code. In other cases, it is a third-party service, such as a payment gateway, which limits performance.
Often these business rules are modeled on the processes and transactions that exist in the physical world. For example, an e-commerce site will typically process a synchronous authorization on the credit card used in a sale, the same way your local convenience store does. This creates a heavy dependency between the website and the payment processor, so the website's performance is now limited by the scalability and availability of the payment processor.
But is it really necessary?
In a scalable system, services are autonomous and use asynchronous workflows in an eventually consistent architecture.
It is worth reviewing these business rules and trying to simplify them.
Should the item be removed from inventory when it is added to the cart, or is it okay to do so asynchronously when the order is completed? Can we complete orders without authorizing credit cards up front?
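The decoupling described above can be sketched as follows: instead of authorizing the card synchronously in the request path, the order is accepted immediately and payment authorization happens later from a queue. An in-process deque stands in for a real message broker, and the function names are illustrative.

```python
from collections import deque

order_queue = deque()  # stand-in for a message broker
completed = []

def place_order(order_id, card_token):
    # Fast path: accept the order without waiting on the payment gateway.
    order_queue.append({"order_id": order_id, "card": card_token})
    return {"order_id": order_id, "status": "accepted"}

def process_payments(authorize):
    # Background worker: drain the queue, calling the (slow) gateway.
    while order_queue:
        order = order_queue.popleft()
        order["status"] = "paid" if authorize(order["card"]) else "failed"
        completed.append(order)
```

The business trade-off is that a small fraction of accepted orders may later fail authorization and need a follow-up email or cancellation, which is exactly the kind of rule change decision-makers must sign off on.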
Often, decision-makers will change the rules once they have seen the performance consequences.
13. Ask: What will I gain from increased performance?
Like any project, the "why?" should always come before the "how".
In other words, before you spend a lot of time and money redesigning your system to handle more visitors, don't forget to consider your business.
If your business needs to sell limited stock to a large audience, like a sneaker outlet or popular concert tickets, there is no reason to spend resources on a system capable of handling all users at once.
In such a situation, it is nice if your site has a large capacity. But if you open the floodgates and let in far more people than there are tickets, you set users up for disappointment.
Summing up
Designing and building a scalable website is not easy. The infinitely scalable website will always be out of reach.
But if you use a combination of the right technical tools and business processes, you will set yourself up for an exemplary and successful website.
If you have followed these 13 essential steps, you will be on the right track.