Website monitoring

Website monitoring is an automated process of checking the availability of a site. Its main goal is to evaluate whether clients can actually reach the site. A site is doing its job when an interested visitor can load a page and make a purchase or find the information they need. If this fails for some reason, the site does not fulfill its mission, and the visitor will find what they need somewhere else.

There are many solutions to this problem, and all of them can be divided into passive and active approaches. The result of monitoring is an uptime value, measured with some accuracy. From it, one can tell how long the site was broken during a given period of time (usually a year). Low uptime usually means that the server hosting the site, or the internet connection to it, is unreliable and needs to be changed.
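To make the figure concrete, here is a minimal sketch (illustrative arithmetic only, not part of any monitoring tool) of how much yearly downtime a given uptime value implies:

```csharp
// Illustrative arithmetic only: the yearly downtime implied by an uptime figure.
using System;

class UptimeMath
{
    static void Main()
    {
        const double hoursPerYear = 365 * 24; // 8760 hours

        foreach (double uptimePercent in new[] { 99.0, 99.9, 99.99 })
        {
            double downtimeHours = hoursPerYear * (100 - uptimePercent) / 100;
            Console.WriteLine($"{uptimePercent}% uptime = about {downtimeHours:F1} hours of downtime per year");
        }
    }
}
```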

  • Uptime
  • Downtime
  • Active monitoring
  • Passive monitoring
  • Availability
Host-Tracker under Windows Azure
Anyone actively involved with the Web should know Host Tracker, a company from Ukraine that has been running one of the leading global web monitoring services since 2004. Its goal is to monitor site health and accessibility in near real time. With its alert messaging system, Host Tracker helps customers reduce downtime, improve quality of service for users, and quickly localize problems.
Architecturally, Host Tracker consists of a server-based hub, acting both as a data collector and a control center, and a set of software agents launched in various regions, typically on equipment operated by major providers, hosting companies and affiliates. The geographically distributed architecture improves the overall reliability of the system and also makes it possible to collect data on access speed, bandwidth and other key performance characteristics at the regional level, a critically important feature for international business.
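A minimal sketch of how such a regional agent might operate is shown below; the hub endpoint, region name and JSON payload are assumptions for illustration, not Host Tracker's actual protocol.

```csharp
// Hypothetical agent sketch: time a request to the monitored site from one region
// and report the result to the central hub. Endpoint and payload are illustrative.
using System;
using System.Diagnostics;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class MonitoringAgent
{
    static readonly HttpClient Client = new HttpClient { Timeout = TimeSpan.FromSeconds(30) };

    static async Task Main()
    {
        const string target = "https://example.com";           // site under monitoring
        const string hub = "https://hub.example/api/results";   // assumed hub endpoint

        var timer = Stopwatch.StartNew();
        bool up;
        try
        {
            var response = await Client.GetAsync(target);
            up = response.IsSuccessStatusCode;
        }
        catch (Exception)   // network failure or timeout counts as "down"
        {
            up = false;
        }
        timer.Stop();

        // Report region, availability and response time to the control center.
        string json = $"{{\"region\":\"eu-west\",\"url\":\"{target}\",\"up\":{up.ToString().ToLower()},\"ms\":{timer.ElapsedMilliseconds}}}";
        await Client.PostAsync(hub, new StringContent(json, Encoding.UTF8, "application/json"));
    }
}
```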
The first version of Host Tracker, which is still operating and serving tens of thousands of customers, was Linux based. Today it is supported by nine control servers, housed in two data centers on a colocation basis, and a few dozen agents. Since the ultimate objective of web monitoring is to increase the uptime of the clients' web resources (95% of Host Tracker customers were able to increase it to 99%), the performance and accessibility of the service itself are not just critical but fundamental parameters that affect the whole business. In theory, Host Tracker should demonstrate accessibility close to 100%; however, the extensive growth of the service made this a hard task.
Host Tracker was facing constantly growing network traffic, a problem for seamless operation of the service. The inability to add new control servers on the fly and the difficulty of maintaining heterogeneous hardware of different ages were further limiting factors. Moreover, the desire to develop the service by supporting a wider range of protocols and network services ran into obstacles. “Unfortunately, Linux offered a limited choice of ready-to-use solutions and libraries, while inventing something completely new was difficult,” says Artem Prisyazhnyuk, Host Tracker's director. “We had the idea of replacing our technology stack with a more sophisticated one, and after taking a closer look at the .NET platform and its potential in terms of scalability and network support, I realized that was exactly what we had been looking for.”
It was clear that migrating to a completely different platform would be a complex task; the project stretched over three years. However, this turned out to be a blessing in disguise: during that period cloud computing arrived, and it looked like an ideal tool both for solving the scalability problem and for dispensing with one's own infrastructure altogether. Moreover, the PaaS model removed most of the administration effort and made it possible to manage the application as a self-contained entity, up to complete automation, so Windows Azure in fact had no alternatives.
As a result, the second version of Host Tracker, which went into commercial operation in May 2012, already runs on Windows Azure. Its central component is implemented as a Web Role backed by SQL Azure Database; it provides the external portal, analytics and report generation, and control of the monitoring applications. The latter run as Worker Role instances, which also use SQL Azure Database to store their data and allow the service to scale with the network load. The agents work as they did before; moving them to Windows Azure is under consideration.
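A schematic Worker Role skeleton following the standard Windows Azure Cloud Services pattern might look like the sketch below; this is not Host Tracker's code, and CheckPendingTargets is a placeholder for the actual monitoring logic.

```csharp
// Schematic Worker Role skeleton (Windows Azure Cloud Services pattern), not actual Host Tracker code.
using System;
using System.Net;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class MonitoringWorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Raise the default outbound connection limit so that many sites
        // can be polled in parallel from a single instance.
        ServicePointManager.DefaultConnectionLimit = 100;
        return base.OnStart();
    }

    public override void Run()
    {
        while (true)
        {
            CheckPendingTargets();                     // poll sites, store results in SQL Azure
            Thread.Sleep(TimeSpan.FromSeconds(30));    // wait before the next monitoring pass
        }
    }

    private void CheckPendingTargets()
    {
        // Placeholder: read the list of monitored sites from the database,
        // run the HTTP/ICMP checks and write the results back.
    }
}
```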
Host Tracker now monitors sites over HTTP/HTTPS and ICMP, can check specific ports, and supports various HTTP methods (HEAD, POST, GET).
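For illustration, the two kinds of check might be performed roughly as in the following simplified sketch (host names are placeholders, and this is not the service's own implementation):

```csharp
// Simplified sketch of an HTTP check (HEAD request) and an ICMP check (ping).
using System;
using System.Net.Http;
using System.Net.NetworkInformation;
using System.Threading.Tasks;

class Checks
{
    static async Task Main()
    {
        // HTTP/HTTPS check: a HEAD request confirms availability without downloading the page body.
        using (var http = new HttpClient())
        {
            var request = new HttpRequestMessage(HttpMethod.Head, "https://example.com");
            var response = await http.SendAsync(request);
            Console.WriteLine($"HTTP: {(int)response.StatusCode} {response.StatusCode}");
        }

        // ICMP check: ping the host and report the round-trip time.
        using (var ping = new Ping())
        {
            PingReply reply = ping.Send("example.com", 3000);   // 3-second timeout
            Console.WriteLine($"ICMP: {reply.Status}, {reply.RoundtripTime} ms");
        }
    }
}
```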



Alert reporting is available via email, SMS and instant messages, and customers can receive reports with statistics on the monitored resources and their performance. Setting up monitoring for five sites takes only about six minutes, the average time to detect a failure is a couple of minutes, and informing the customer about the problem takes another 1-3 minutes. Using this service, anyone can check any site, including its accessibility from various regions.
As a result, while the move to the .NET platform itself gave Host Tracker the potential to modernize, optimize the application architecture and implement new internal functions, the migration to the cloud made it possible to drop less important but time-consuming activities such as administering the solution and, above all, to reach the required performance indicators. For all basic Windows Azure services, Microsoft declares 99.9% accessibility and guarantees monthly refunds should this figure be lower. This creates firm ground for operating services like Host Tracker, for which accessibility is the most critical parameter. Using the cloud infrastructure also protects the service better: unauthorized access to the application and many types of attacks are effectively ruled out, while data safety is ensured by triple redundancy.
Host Tracker gained another advantage from abandoning its own infrastructure. The service's performance characteristics are also critical, as they directly affect the operation of the failure reporting system. In this respect, Windows Azure is a virtually inexhaustible source of computing power: by launching additional monitoring instances in time, Host Tracker can keep its operating parameters at the required level. Moreover, the cloud environment makes it possible to automate this process almost completely, removing the need for direct control.