Website
A website is a basic unit of information on the Internet.

A website is a single resource of information on the Internet. Usually it is realized as a set of web pages devoted to a specific topic. The pages are written in special markup languages and may contain various types of data: applications, audio and video files, images, etc. The site is hosted on a hardware server and can be found on the Internet by a unique name thanks to DNS. Access to a website is usually provided over HTTP (Hypertext Transfer Protocol) or its secure version, HTTPS. This protocol makes it possible to deliver information from the server to a client, who usually views the pages with the help of a dedicated application called a browser.
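For illustration, the following minimal Python sketch (using the placeholder domain example.com) shows the two steps described above: resolving a site name through DNS and fetching a page over HTTPS, roughly what a browser does when a website is opened:

    import socket
    import urllib.request

    # Resolve the site name to an IP address through DNS
    # ("example.com" is only a placeholder domain).
    ip_address = socket.gethostbyname("example.com")
    print("example.com resolved to", ip_address)

    # Fetch the page over HTTPS, much like a browser would.
    with urllib.request.urlopen("https://example.com/") as response:
        page = response.read()
        print("Received", len(page), "bytes, HTTP status", response.status)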

Related glossary entries:
  • Uptime
  • Website Hosting
  • DNS
  • HTTP
  • Website Monitoring
"

Although i'm using the free offering, but it has given me the liberty of alerting system for my landscape

"
- Adam
Host-Tracker under Windows Azure

Those who are actively involved with the Web should know HostTracker, a company from Ukraine that has been running one of the leading global web monitoring services since 2004. Its goal is to monitor site health and accessibility in near real time. Using its alert messaging system, HostTracker helps to reduce downtime, improve the quality of service for users, and quickly localize problems.

Architecturally, HostTracker includes a server-based hub, acting both as a data collector and a control center, and a series of software agents launched in various regions – typically on equipment operated by major providers, hosting companies and affiliates. The geographically distributed architecture provides overall system reliability and also allows collecting data on access speed, bandwidth and other key performance characteristics at the regional level – a critically important feature for international business.

The first version of HostTracker, which is still functioning and serving tens of thousands of customers, was Linux-based. Today it is supported by nine control servers, organized in two data centers on a colocation basis, and a few dozen agents. Considering that the final objective of web monitoring is to increase the uptime of clients' web resources – 95% of HostTracker customers were able to raise it to 99% – the performance and accessibility of the service itself are not just critical, but fundamental parameters that influence the whole business. Theoretically, HostTracker should demonstrate accessibility close to 100%. However, the extensive growth of the service made this a hard task.

HostTracker was facing constantly increasing network traffic – a problem for the seamless operation of the service. The inability to add new control servers on the fly and the difficulty of maintaining non-uniform hardware of varying age were further limiting factors. Moreover, the desire to develop the service through wider protocol and network service support was meeting certain obstacles. “Unfortunately, for Linux there was a limited choice of ready-to-use solutions and libraries, while inventing something completely new was difficult,” says Artem Prisyazhnyuk, HostTracker's director. “We had the idea of replacing our technology stack with a more sophisticated one, and after taking a closer look at the .NET platform and its potential in terms of scalability and network support, I realized it was exactly what we had been looking for.”

It was clear that migrating to a completely different platform would be a complex task – the project extended over three years. However, it turned out to be a blessing in disguise: during this period cloud computing appeared, and it seemed an ideal tool both for solving the scalability problem and for setting aside one's own infrastructure entirely. Besides, the PaaS model removed most of the administration effort and made it possible to manage the application as a self-contained entity, up to complete automation; thus, Windows Azure had in fact no alternative.

As a result, the second version of HostTracker, whose commercial operation started in May 2012, already runs under Windows Azure. Its central component is implemented as a Web Role associated with a SQL Azure Database – it provides the external portal, analytics and report generation, and control of the monitoring applications. The latter run as Worker Role instances, which also use SQL Azure Database to store their data and allow the service to scale depending on the network load. The agents function as before; their transfer to Windows Azure is being considered.
HostTracker now uses the HTTP/HTTPS and ICMP protocols, monitors specific ports, and supports various HTTP methods (HEAD/POST/GET), among other checks.
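For illustration only, a single check of this kind might look like the Python sketch below. The URL, port and method are placeholders, and this is not HostTracker's actual code (which runs on .NET under Windows Azure); it merely shows the idea of probing a site with a chosen HTTP method and port:

    import urllib.request
    import urllib.error

    def check_http(url: str, method: str = "HEAD", timeout: float = 10.0):
        """Run a single HTTP/HTTPS check and return (is_up, detail)."""
        request = urllib.request.Request(url, method=method)
        try:
            with urllib.request.urlopen(request, timeout=timeout) as resp:
                return True, resp.status
        except urllib.error.HTTPError as err:
            return False, err.code    # the server answered, but with an error status
        except (urllib.error.URLError, OSError) as err:
            return False, str(err)    # network-level failure (DNS, timeout, refused)

    # Placeholder target: probe port 443 with a HEAD request.
    is_up, detail = check_http("https://example.com:443/", method="HEAD")
    print("UP" if is_up else "DOWN", detail)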
 
Alarm reporting is available via email, SMS and instant messages. The customer can receive reports with statistics about the monitored resources and their performance. Setting up monitoring for five sites takes only about six minutes, the average time to detect a failure is within a couple of minutes, and it takes another 1-3 minutes to inform the customer about the problem. Using this service, anyone can check any site, including its accessibility from various regions.
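As a rough illustration of how such a failure alert might be dispatched, here is a generic Python sketch. The addresses and the local SMTP relay are assumptions for the example, not HostTracker's actual notification code; SMS and instant-message channels would use their own gateways:

    import smtplib
    from email.message import EmailMessage

    def send_alert(site: str, detail: str) -> None:
        """Send a plain-text failure alert by email (placeholder addresses)."""
        msg = EmailMessage()
        msg["Subject"] = f"[monitoring] {site} is DOWN"
        msg["From"] = "alerts@example.com"
        msg["To"] = "admin@example.com"
        msg.set_content(f"Check failed for {site}: {detail}")
        # Assumes an SMTP relay listening on localhost:25.
        with smtplib.SMTP("localhost", 25) as smtp:
            smtp.send_message(msg)

    send_alert("example.com", "HTTP 503 reported from 3 of 5 monitoring locations")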

As a result, the transfer to the .NET platform itself gave HostTracker the potential to modernize the service, optimize the application architecture and implement new internal functions, while the migration to the cloud made it possible to drop less important but time-consuming activities such as administering the solution and, above all, to reach the necessary performance indicators. For all basic Windows Azure services, Microsoft declares 99.9% availability and guarantees monthly refunds should this indicator fall lower. This creates a firm ground for operating services like HostTracker, for which availability is the most critical parameter. Using the cloud infrastructure also provides better protection for the service: unauthorized access to the application and many types of attacks are effectively excluded, while data safety is ensured by triple replication.

HostTracker gained another advantage from abandoning its own infrastructure. The service's performance characteristics are also rather critical, since they directly affect the operation of the failure reporting system. In this respect, Windows Azure is a virtually inexhaustible source of computing power: by launching additional monitoring instances in time, HostTracker's operational parameters can be kept at the necessary level. Moreover, the cloud environment is exactly what is needed to make this process almost fully automatic, removing the need for manual control.

Website Monitoring

Website monitoring is the process of supervising the availability and performance of a site. Usually it is applied to commercial sites or other pages for which high availability is very important.

Companies that provide website monitoring services let their clients check a website, a server, a port, or any other entity accessible from the Internet. The responses are collected and analyzed. Usually the monitoring is performed from different locations distributed around the whole world or across a specific continent or country. Such monitoring is called distributed monitoring and helps to detect network-related issues as well as site- or server-related ones. Distributed monitoring also often makes it possible to analyze the site's performance from places close to the real customers, instead of distant locations that may have high latencies.

The collected information can be presented in different forms: email reports, graphs and various derived metrics that help the client get a comprehensive view of the site's performance. Parameters like load time, speed and others can help to optimize the site. In case of a critical error, monitoring services use a variety of notification methods to deliver the alert to the client: SMS, voice call, instant messengers, email and others. This, together with immediate diagnostics, helps responsible persons such as server administrators or developers to fix the site as soon as possible and minimize the duration of the failure.
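The core of each remote check is simple. The Python sketch below (with the placeholder URL example.com) measures the load time of a page from one vantage point; in a real distributed setup, agents in different regions would each run such a check and send the result to a central collector for analysis:

    import time
    import urllib.request
    import urllib.error

    def measure(url: str, timeout: float = 10.0):
        """Return (load_time_seconds, http_status) or (None, error_text)."""
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                resp.read()  # download the whole page, as a visitor would
                return time.monotonic() - start, resp.status
        except (urllib.error.URLError, OSError) as err:
            return None, str(err)

    load_time, status = measure("https://example.com/")
    if load_time is None:
        print("ALERT: check failed:", status)
    else:
        print(f"Page loaded in {load_time:.2f} s, HTTP {status}")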

Why is this important? For commercial sites, working time translates directly into income. Roughly speaking, 2 hours of downtime per day means that 1/12 of the potential clients are lost. Worse still, even loyal customers may find a more reliable competitor if they cannot get their services or goods when they want them. For other types of sites (government, educational, NGO) this is very important too: if people can't find the information quickly, they'll find another source. Some parameters, like site speed, are important for search engines; others, such as database connectivity, can greatly affect the usability of the site. Monitoring of internal values like CPU load, memory consumption and disk space is important for administrators in order to prevent performance degradation. Another important purpose is verification of the hosting provider's SLA (service-level agreement). Due to technical issues, no site can be available 100% of the time over a long enough period: servers are sometimes rebooted, updated or upgraded. So each hosting provider guarantees a specific value, called uptime, which shows how long a site may be down for technical reasons. Uptime is usually measured in percent. The table below shows the maximum downtime per year for each uptime value (a short worked calculation follows the table):

  • 90%          876 hours
  • 95%          438 hours
  • 99%          87.6 hours
  • 99.9%       8 hours 45 minutes
  • 99.99%     52.6 minutes
  • 99.999%   5 minutes 15 seconds
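These figures follow directly from the length of a year (365 days, i.e. 8,760 hours). A small Python sketch of the calculation:

    HOURS_PER_YEAR = 365 * 24  # 8760 hours

    for uptime_percent in (90.0, 95.0, 99.0, 99.9, 99.99, 99.999):
        downtime_hours = HOURS_PER_YEAR * (1 - uptime_percent / 100)
        print(f"{uptime_percent}% uptime -> up to {downtime_hours:.2f} hours "
              f"({downtime_hours * 60:.1f} minutes) of downtime per year")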

If the actual site performance does not meet the SLA, it can be grounds for a claim and a refund request. This data can also help customers select the hosting that best suits their needs.

Monitoring companies often provide additional services, such as vulnerability checks, virus scanning, domain and certificate expiration checks and many others, in order to make a more useful product for their customers.

Different ways to monitor

Depending on its purpose, monitoring can be performed in several ways. Internal monitoring tools require software to be installed inside the monitored system, for example a corporate network; this helps to detect network issues, track system performance, and rule out or catch hardware problems. External monitoring is performed from the outside; its purpose is to check the availability and performance of the system through a third party's eyes. Real user monitoring is external monitoring that simulates a real visitor to the site; depending on its complexity, such monitoring can analyze page loading, content, and sometimes even design problems. The most advanced monitoring lets the client create a scenario for a visitor. This is called transaction monitoring and performs a series of steps one by one: load the page, navigate through the menu, make a purchase (a small sketch of such a scenario follows this paragraph). Passive monitoring is performed by code integrated into the website itself, which sends specific information to a collector server each time a page is loaded; it helps to analyze customers' actions on the site and the traffic.
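A minimal sketch of a transaction-monitoring scenario, in Python with hypothetical example.com URLs; a real scenario would follow the client's own flow (log in, add an item to the cart, check out, and so on):

    import urllib.request
    import urllib.error
    from http.cookiejar import CookieJar

    # Hypothetical steps for illustration only.
    STEPS = [
        ("load home page", "https://example.com/"),
        ("open catalog",   "https://example.com/catalog"),
        ("view an item",   "https://example.com/catalog/item-1"),
    ]

    def run_scenario(steps):
        """Execute the steps one by one, keeping cookies like a real visitor."""
        opener = urllib.request.build_opener(
            urllib.request.HTTPCookieProcessor(CookieJar()))
        for name, url in steps:
            try:
                with opener.open(url, timeout=10) as resp:
                    resp.read()
            except (urllib.error.URLError, OSError) as err:
                return f"FAIL at step '{name}': {err}"
        return "OK: all steps completed"

    print(run_scenario(STEPS))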

Monitoring services usually support different check protocols and can monitor not only websites but also other entities such as file servers, mail servers, specific ports, etc. (a simple port-check sketch follows below). Depending on the task, the monitoring interval can vary from several seconds to once per day.
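For example, checking that a specific TCP port answers at all can be done with a few lines of Python; the hosts and ports here are placeholders:

    import socket

    def check_port(host: str, port: int, timeout: float = 5.0) -> bool:
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Placeholder targets: a mail server (SMTP) and an HTTPS port.
    for host, port in [("mail.example.com", 25), ("example.com", 443)]:
        state = "open" if check_port(host, port) else "unreachable"
        print(f"{host}:{port} is {state}")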
