Uptime
Uptime is the period of time during which a site is up and performing well.

Uptime corresponds to the time during which a site is accessible from the Internet. The opposite term, downtime, shows how long a site has not been working during a specified period. Uptime is usually measured as a percentage, most often over a year. A yearly percentage is easily converted into a time value. Some typical uptime values and the corresponding periods of unavailability per year are shown here:

  • 90% - 876 hours
  • 99% - 87 hours, 36 minutes
  • 99.9% - 8 hours, 45 minutes, 36 seconds
  • 99.99% - 52 minutes, 34 seconds
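
The figures above follow directly from a 365-day year; a small, purely illustrative sketch of the conversion:

```python
# Convert an uptime percentage into yearly downtime, assuming a 365-day year
# (8,760 hours), which matches the figures listed above.
def downtime_per_year(uptime_percent: float) -> str:
    total_seconds = 365 * 24 * 3600
    down_seconds = round(total_seconds * (1 - uptime_percent / 100))
    hours, rest = divmod(down_seconds, 3600)
    minutes, seconds = divmod(rest, 60)
    return f"{hours} h {minutes} min {seconds} s"

for pct in (90, 99, 99.9, 99.99):
    print(f"{pct}% uptime -> {downtime_per_year(pct)} of downtime per year")
```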

So high uptime is really important. Even though 99% may seem like a high value, it corresponds to several days of failure per year; if those occur in a row, many clients can be lost. The uptime value is usually guaranteed by the web hosting provider where the site is hosted. Website monitoring can help you increase uptime and check whether the value declared by the hosting company is real.

  • CM.Glossary.WebsiteMonitoring
  • CM.Glossary.Downtime
  • CM.Glossary.WebHosting
  • CM.Glossary.Availability
Host-Tracker under Windows Azure

Those who are actively involved with the Web should know HostTracker, a company from Ukraine that has been running one of the leading global web monitoring services since 2004. Its goal is to monitor site health and accessibility in near real time. Using its alert messaging system, HostTracker helps reduce downtime, improve quality of service for users, and quickly localize problems.

Architecturally, HostTracker includes a server-based hub, acting both as a data collector and a control center, and a series of software agents launched in various regions, typically on equipment operated by major providers, hosters and affiliates. The geographically distributed architecture provides overall system reliability and also allows data on access speed, bandwidth and other key performance characteristics to be collected at the regional level, a critically important feature for international business.
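
As an illustration only (this is not HostTracker's actual agent protocol), a regional agent could time a check against a target site and report the measurement to the central hub; the endpoint and region names below are made up:

```python
# Illustrative sketch of a regional monitoring agent reporting to a central hub.
# HUB_URL and REGION are assumptions, not real HostTracker identifiers.
import json, time, urllib.request

HUB_URL = "https://hub.example.com/api/results"   # hypothetical collector endpoint
REGION = "eu-west"                                 # identifier for this agent's location

def check(url: str) -> dict:
    """Time a request to the target URL and capture the HTTP status (or None on failure)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            status = resp.status
    except Exception:
        status = None
    return {
        "region": REGION,
        "url": url,
        "status": status,
        "response_ms": round((time.monotonic() - start) * 1000),
        "checked_at": int(time.time()),
    }

result = check("https://example.com/")
req = urllib.request.Request(
    HUB_URL, data=json.dumps(result).encode(), headers={"Content-Type": "application/json"}
)
# urllib.request.urlopen(req)  # send the measurement to the hub
```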

The first version of HostTracker, which is still functioning and serving tens of thousands of customers, was Linux based. Today it is supported by nine control servers, housed in two data centers on a colocation basis, and a few dozen agents. Considering that the final objective of web monitoring is to increase the uptime of clients' web resources, and that 95% of HostTracker customers were able to increase it to 99%, the performance and accessibility of the service itself are not just critical, but fundamental parameters that influence the whole business. Theoretically, HostTracker should demonstrate accessibility close to 100%. However, the extensive growth of the service made this task hard to achieve.

HostTracker was facing constantly increasing network traffic, a problem for seamless operation of the service. The inability to add new control servers on the fly and the difficulty of maintaining heterogeneous hardware of varying age were further limiting factors. Moreover, the desire to develop the service through wider protocol and network service support ran into obstacles. “Unfortunately, for Linux there was a limited choice of ready-to-use solutions and libraries, while inventing something completely new was difficult,” says Artem Prisyazhnyuk, HostTracker director. “We had the idea of replacing our technology stack with a more sophisticated one, and after taking a closer look at the .NET platform and its potential in terms of scalability and network support, I realized that was exactly what we had been looking for.”

It was clear that migrating to a completely different platform would be a complex task; the project extended over three years. However, it turned out to be a blessing in disguise: during this period cloud computing emerged, which seemed an ideal tool both for solving the scalability problem and for setting aside one's own infrastructure entirely. Besides, the PaaS model removed most of the effort of administering the solution and allowed the application to be controlled as a self-contained entity, to the point of complete automation, so Windows Azure in fact had no alternatives.

As a result, the second version of HostTracker, whose commercial operation started in May 2012, already runs under Windows Azure. Its central component is implemented as a Web Role backed by SQL Azure Database; it provides the external portal, analytics and report generation, and control of the monitoring applications. The latter run as Worker Role instances, which also use SQL Azure Database to store their data and allow the service to scale with network load. Agents function as they did before, and their transfer to Windows Azure is being considered.
Now HostTracker monitors over HTTP/HTTPS and ICMP, can check specific ports, and supports various HTTP methods (HEAD/POST/GET).
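
As a rough sketch of such checks (hostnames are placeholders, and this is not HostTracker's internal code), an HTTP HEAD request and a TCP port probe could look like this:

```python
# Minimal sketch of the kinds of checks described above: an HTTP HEAD request
# and a raw TCP port probe. The target host and port are placeholders.
import http.client, socket

def http_head_ok(host: str, path: str = "/", timeout: float = 10.0) -> bool:
    """Issue a HEAD request and treat any status below 400 as 'up'."""
    conn = http.client.HTTPSConnection(host, timeout=timeout)
    try:
        conn.request("HEAD", path)
        return conn.getresponse().status < 400
    except (OSError, http.client.HTTPException):
        return False
    finally:
        conn.close()

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Check whether a specific TCP port accepts connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(http_head_ok("example.com"), port_open("example.com", 443))
```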
Alarm reporting is available via email, SMS and instant messages. The customer can receive reports with statistics about the monitored resources and their performance. Setting up monitoring for five sites takes only about six minutes, the average time to detect a failure is within a couple of minutes, and it takes another 1-3 minutes to inform the customer about the problem. Using this service, anyone can check any site, including access from various regions.

As a result, while the transfer to the .NET platform itself gave HostTracker the potential to modernize, optimize the application architecture and implement new internal functions, the migration to the cloud made it possible to drop less important but time-consuming activities such as administering the solution and, above all, to reach the necessary performance indicators. Microsoft declares 99.9% accessibility for all basic Windows Azure services and guarantees monthly refunds should this indicator be lower. This creates firm ground for operating services like HostTracker, as accessibility is the most critical parameter for such applications. Using the cloud infrastructure also provides better protection for the service: unauthorized access to the application and many types of attacks are effectively excluded, while data safety is ensured by triple replication.

HostTracker gained another advantage from abandoning its own infrastructure. The service's performance characteristics are also critical, since they directly affect the operation of the failure reporting system. In this respect, Windows Azure is a virtually inexhaustible source of computing power: by starting additional monitoring instances in time, HostTracker's operating parameters can be kept at the necessary level. Moreover, the cloud environment is exactly what is needed to make this process almost fully automatic, removing the need for direct control.
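
For illustration, a scaling rule of this kind can be reduced to a simple function; the thresholds below are assumptions, and a real deployment would rely on the cloud platform's own autoscale mechanism:

```python
# Illustrative scaling rule only: when the backlog of pending checks grows,
# start more monitoring instances; when it shrinks, remove them.
# The per-instance capacity and the min/max bounds are assumed values.

def desired_instances(pending_checks: int, checks_per_instance: int = 500,
                      minimum: int = 2, maximum: int = 50) -> int:
    """Scale the worker count to the current monitoring backlog."""
    needed = -(-pending_checks // checks_per_instance)  # ceiling division
    return max(minimum, min(maximum, needed))

print(desired_instances(3200))  # -> 7 instances for a backlog of 3,200 checks
```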

Improve site uptime

It may seem strange, but instead of simply going down, web services usually become so overloaded with user requests that they slow down and then stop responding. Nowadays it is extremely important to provide secure, high-quality web hosting that is available at all times, as e-business keeps growing in popularity. Website owners demand perfect service, 100% uptime and quality assurance. Numerous techniques may be used to make access to a website smooth and thus increase uptime.

Here we offer twelve methods recommended for improving performance and increasing uptime. They require both software and hardware optimization. Many software features can be improved through general, better coding standards applied by the website manager, while the company providing the web hosting services needs to improve its hardware constantly.
You can measure the accessibility of a website using monitoring services such as HostTracker.

Software Optimization
The following are the first six ways to improve uptime:

  • Split the databases
  • Separate the read and write databases
  • Use popular content caching more often and improve its quality
  • Optimize static content
  • Ensure compressed delivery of content
  • Optimize the content management system


To begin with, splitting the databases – horizontally, vertically, or a combination of both – is essential and makes the connection more reliable. It is also good to separate the read and write databases, which allows a master/slave setup. These actions make it easier to extend the database infrastructure in the future.
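
A minimal routing sketch, with hypothetical hostnames, shows the idea of sending writes to the master and spreading reads across replicas:

```python
# Sketch of read/write splitting: writes go to the master database, reads are
# spread across replicas. Hostnames are placeholders; a real setup relies on
# the database's own replication and a proper connection pool.
import random

MASTER = "db-master.internal:5432"                      # hypothetical write endpoint
REPLICAS = ["db-replica1.internal:5432", "db-replica2.internal:5432"]

def pick_database(sql: str) -> str:
    """Route statements that modify data to the master, everything else to a replica."""
    first_word = sql.lstrip().split(None, 1)[0].upper()
    is_write = first_word in {"INSERT", "UPDATE", "DELETE", "CREATE", "ALTER"}
    return MASTER if is_write else random.choice(REPLICAS)

print(pick_database("SELECT * FROM orders"))    # -> one of the replicas
print(pick_database("INSERT INTO orders ..."))  # -> the master
```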

Second, when you configure the system to cache popular content better and use that cache more often, your site will scale more easily as many users work with it. Web caching is no different from caching on a computer: popular content is stored in a separate container, giving users much quicker access to the information.
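
A toy example of such caching (the five-minute lifetime is an assumption) keeps rendered pages in memory until a time-to-live expires:

```python
# Popular content is rendered once and then served from an in-memory cache
# until the entry's time-to-live expires.
import time

CACHE: dict[str, tuple[float, str]] = {}
TTL_SECONDS = 300  # assumed cache lifetime for popular pages

def render_page(path: str) -> str:
    """Placeholder for the expensive work (database queries, templating)."""
    return f"<html>content for {path}</html>"

def get_page(path: str) -> str:
    now = time.time()
    cached = CACHE.get(path)
    if cached and now - cached[0] < TTL_SECONDS:
        return cached[1]           # cache hit: no database work needed
    body = render_page(path)       # cache miss: do the expensive work once
    CACHE[path] = (now, body)
    return body

print(get_page("/index"))  # rendered
print(get_page("/index"))  # served from cache
```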

Optimizing static content is one more way to make access to pages and files quicker. One approach is to compress images as much as possible while, of course, preserving their quality. It is also necessary to check whether the web server can deliver compressed content; this setting does not apply to the images themselves, as they are already compressed files. Take care to have all the appropriate settings in place from the beginning.
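
One quick way to verify that compressed delivery is actually enabled is to request a page with gzip accepted and inspect the Content-Encoding header; the URL below is a placeholder:

```python
# Ask the server for gzip-compressed content and check whether it complies.
import urllib.request

req = urllib.request.Request(
    "https://example.com/",                      # placeholder URL
    headers={"Accept-Encoding": "gzip"},
)
with urllib.request.urlopen(req, timeout=10) as resp:
    encoding = resp.headers.get("Content-Encoding", "none")
    print(f"Content-Encoding: {encoding}")       # 'gzip' means compression is on
```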
One more useful step is to improve the content management system by reducing the number of database calls per page request. It is like any type of connection: the less data has to be sent back and forth, the easier the connection is to maintain. In these circumstances, the number of calls to the database should be as low as possible; this ensures that users can access the content at the greatest speed.
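
A small sketch of the idea, with a hypothetical schema: one joined query replaces a separate query per item (the classic N+1 problem):

```python
# One round trip for the whole page instead of one query per post.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE comments (post_id INTEGER, body TEXT);
    INSERT INTO posts VALUES (1, 'Hello'), (2, 'Uptime tips');
    INSERT INTO comments VALUES (1, 'Nice'), (2, 'Useful'), (2, 'Thanks');
""")

rows = conn.execute("""
    SELECT p.title, COUNT(c.post_id)
    FROM posts p LEFT JOIN comments c ON c.post_id = p.id
    GROUP BY p.id
""").fetchall()
print(rows)  # [('Hello', 1), ('Uptime tips', 2)]
```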

Software and Hardware Optimization
The next six methods for increasing uptime are:

  • Use content delivery networks
  • Use emerging standards, such as HTML5
  • Improve the programming techniques
  • Add the content “expires” headers
  • Lessen the number of HTTP requests
  • Use Ethernet connections, allowing for more speed


Content delivery networks allow larger amounts of media to be handled while improving the performance of the site. They are designed to direct traffic onto private networks. Such services handle large media files by routing traffic along the edge of the Internet rather than straight through it, averting extreme overloading. Because the content delivery network handles the large files, the origin servers are unloaded, providing a fast, high-quality connection.

Emerging standards such as HTML5 include mechanisms that improve websites through advanced programming techniques aimed at website and Internet communications. Such standards do not guarantee 100% that a website using them will not go down, but the mechanisms built into the code act as a fallback if it happens. Moreover, you should use improved programming methods when dealing with heavy loads and traffic spikes.

“Expires” content headers make all automatically downloaded files cacheable for visitors. By adding these headers to the content, you prevent the browser from downloading the same files over and over and block pointless HTTP requests on repeat page views. Just as with reducing the number of database calls, reducing the number of HTTP requests keeps the connection speed stable and the server from being overloaded.
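
A minimal sketch of adding far-future caching headers, shown as a tiny WSGI app; the one-year lifetime is an assumption, and real servers usually set this in their configuration:

```python
# Serve a static response with Cache-Control and Expires headers so browsers
# keep it cached instead of re-downloading it on every visit.
from wsgiref.simple_server import make_server
from wsgiref.handlers import format_date_time
import time

ONE_YEAR = 365 * 24 * 3600

def app(environ, start_response):
    headers = [
        ("Content-Type", "text/css"),
        ("Cache-Control", f"public, max-age={ONE_YEAR}"),
        ("Expires", format_date_time(time.time() + ONE_YEAR)),
    ]
    start_response("200 OK", headers)
    return [b"body { color: #333; }"]

# make_server("", 8000, app).serve_forever()  # serve with caching headers
```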
The last method is to increase Ethernet connection speeds. This makes it possible to cope with larger files and unexpected traffic spikes, and many hosting providers consider it an excellent investment.

Better speeds and reduced downtime make customers happier. Many of these methods to improve web server or website uptime can be carried out in a few small steps. Such tactics, including database rearrangement, software optimization, caching improvements, content compression, the use of content delivery networks and content management systems, better programming practices and hardware upgrades, will improve both web host and website uptime, and thus your business.
