Database Monitoring
Database monitoring: a check of database access and regular execution of a specified query.

The database monitoring feature allows you to run a query during every check and react appropriately to the result. It is also possible to check only the ability to connect to the database, by leaving the query field empty. To set up the monitoring, fill in the connection data: DB server address, port, database name, and the login and password of the user used for the connection. We strongly recommend creating a new user with limited rights; however, do not forget to grant it enough rights to perform the intended actions. It is also necessary to add the HostTracker server addresses to the whitelists of your firewall or other blocking software to allow access. The addresses are permanent and are listed on the same form.


The query can be arbitrary: SELECT, UPDATE, DELETE, INSERT, execution of stored procedures (like a scheduler), result comparison, logical operations. The only restriction is execution time: it must not take longer than 30 seconds, otherwise a timeout error is reported.

It is recommended to create queries which return the necessary value in the first row of the first column; this value is then analysed. For UPDATE, INSERT and DELETE queries the number of affected rows is analysed. The resulting value can be compared with a preset in different ways: equal / not equal / higher / lower / in range. If the condition is not satisfied, the connection to the DB fails, or the query times out, an error is reported.
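As a concrete illustration, here is a minimal sketch of such a check in Python. It assumes a SQL Server database reachable via pyodbc; the connection string, table and threshold are illustrative, not part of HostTracker:

```python
# Minimal sketch of a database check; the DSN, credentials and query
# below are illustrative (assumes SQL Server + the pyodbc driver).
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=db.example.com,1433;DATABASE=shop;"
    "UID=monitor_user;PWD=secret"  # a dedicated user with limited rights
)

def check_database(query: str, expected_min: int) -> bool:
    """Run the query and compare the first column of the first row."""
    conn = pyodbc.connect(CONN_STR, timeout=10)  # connection timeout
    conn.timeout = 30  # query timeout, mirroring the 30-second limit
    try:
        row = conn.cursor().execute(query).fetchone()
        if row is None:
            return False  # empty result counts as a failed check
        return row[0] >= expected_min  # the "higher than" comparison
    finally:
        conn.close()

# Example: alert if no orders were placed during the last hour.
ok = check_database(
    "SELECT COUNT(*) FROM orders "
    "WHERE created_at > DATEADD(hour, -1, GETDATE())",
    expected_min=1,
)
print("OK" if ok else "ALERT")
```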

"

Very usefull service, helps us to monitor activity of our sites.

"
- A.
HostTracker — Sites Monitoring Service
Every site owner knows how important it is to keep the resource operational and available to visitors at all times. Periodic site unavailability has a bad influence even on its positions in search engines (as Google representatives have repeatedly stated), not to mention the fact that visitors are extremely displeased by such “accidents”...

How can you always know, in real time, whether your site is operating properly?

HostTracker is an online website monitoring service with rich functionality and good customer feedback; by the way, it has more than 25 thousand customers (including the resources of companies such as Colgate, KasperskyLab, Panasonic etc.).

The service works as follows: at specified intervals your websites and servers are checked for availability and serviceability (the checks are performed from around the world), and in case of any problems you receive notifications. In addition, uptime statistics are collected for each site, but more on that later.

A quick site serviceability check can be performed directly from the service's main page by typing its address in the upper right window and selecting one of the verification methods:

  • Http – website page loading check
  • Ping – server availability check (pinging)
  • Trace – server availability check (tracing)
  • Port – check of an arbitrary server port

Based on the check results, a detailed report is created on a separate page.
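To give a feel for what the simplest checks do under the hood, here is a rough sketch of Http- and Port-style checks using only the Python standard library (the target host is an example, and a real monitoring probe is of course far more elaborate):

```python
# Rough illustration of two basic check types: HTTP and Port.
import socket
import urllib.request

def http_check(url: str, timeout: float = 10.0) -> bool:
    """Load the page; any successful response counts as 'up'."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:  # DNS failure, refused connection, HTTP error, timeout
        return False

def port_check(host: str, port: int, timeout: float = 5.0) -> bool:
    """Try to open a TCP connection to the given port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(http_check("https://example.com"))  # the Http check
print(port_check("example.com", 443))     # the Port check
```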

To become a HostTracker client and verify the uptime and serviceability of your resources on a regular basis, you need to register in the system and set up the appropriate tasks. Over time, detailed statistical reports and visual graphs are created for each task, greatly facilitating data analysis by presenting the data in an understandable, representative form.

Site failures and problems may be reported, at your option, to:

  • Email, as a letter
  • Telephone, as an SMS or a call
  • Skype or GTalk, as a message

It is also possible to set a notification delay, delivery time windows and so on – the settings are very flexible. By the way, I really miss such functionality in Monitorus, which sends an SMS immediately when a problem occurs, regardless of the time of day; as you can imagine, late nights can become quite “fun” because of this.

Another useful feature of HostTracker is site content monitoring. You may set a list of page keywords whose disappearance triggers a notification, or vice versa, a list of words whose appearance triggers one. The first case is good, for example, for monitoring paid links on other sites, and the second for detecting virus activity in the form of extra code embedded into your website. It is very helpful and very convenient!

Now a few words about the prices of the HostTracker monitoring service. There are several service plans, differing in the maximum number of tasks, check frequency, etc. There is also a full-featured 30-day trial period, as well as a Free plan, within which you may monitor up to two resources with a 30-minute interval. That's not much, of course, but as a way to try out all the features for free and then decide whether a paid plan is appropriate, it does the job. Good luck!

How to activate API for your Host-Tracker account


Host-tracker API

API stands for "application programming interface". It is a feature of the HostTracker website monitoring service, implemented as a preset list of HTTP requests and responses for maintaining and adjusting HostTracker features. It makes it possible to develop applications which interact with the HostTracker service automatically, instead of manual adjustments via the user interface.
The HostTracker API uses the widespread XML and JSON formats. Interaction with the API is performed via the HTTP methods GET, POST, PUT, DELETE and PATCH.

To activate the API for your HostTracker account, send a request to ht2support@host-tracker.com with your login specified.

REST API description:

https://www.host-tracker.com/api/web/help.html

SOAP API description:

www.host-tracker.com/api/soap/v1/help.html
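To illustrate the request/response pattern, here is a short Python sketch using the `requests` library. The endpoint path, authentication scheme and response shape are assumptions made for the example; the description pages above define the actual contract:

```python
# Illustrative only: a JSON call to a REST endpoint. The "/tasks" path
# and basic-auth credentials are placeholders, not the documented API.
import requests

BASE = "https://www.host-tracker.com/api/web"

session = requests.Session()
session.headers["Accept"] = "application/json"  # ask for JSON, not XML

resp = session.get(f"{BASE}/tasks", auth=("login", "password"))
resp.raise_for_status()       # fail loudly on 4xx/5xx
for task in resp.json():      # assumed: a JSON list of task objects
    print(task)
```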

Escalations. Typical scenarios


I was woken up by an SMS at three a.m.
My site went down for three minutes and came back up by itself.
But I could not go back to sleep.
True-life story

As many people know, HostTracker is a website availability monitoring system. One of its main functions is to notify the user of any problems promptly. The efficiency of the notifications and an acceptable level of detail are important: if you send alerts at each “sneeze”, the person will not find the important information in this flow.

We have provided several mechanisms that help the right people get the right notifications:

  • Separation of notifications into several groups according to their criticality;
  • No notifications for short-term failures;
  • Prompt reporting of a problem to the administrator;
  • Reporting a prolonged failure to the management;
  • Using the free alerts first – email, GTalk – and then the paid ones – SMS or phone call;
  • Setting, at the contact level, the working hours during which the contact should receive alerts.

 

There are three types of notifications:

  • The website has “dropped”;
  • The website is still “down”;
  • The website “rose”.

The “dropped” and “rose” notifications are self-explanatory. The “site is still down” notifications are sent on every failed check, but only for confirmed failures. The failure confirmation algorithm was described in the article “False alerts exclusion”.

 

For each site-contact pair you may enable or disable each notification type. The setting can be found in the contact properties as well as in the general “matrix” on the “Notification subscriptions” page.

Escalation and the notification detail level

Suppose two people are responsible for the site:

  • Administrator
  • Manager

 

Let's try to implement the following scenario:

 

  • In the event of a “drop” we want to send an email message to the administrator immediately;
  • If the site does not rise within 15 minutes, we send an SMS to the administrator;
  • If the site is “down” for more than an hour, then we send an SMS to the manager.

 

Add the contacts for the users. While adding them, pay attention to the “Notification Delay” field.

 

We now have three contacts with the following delays:

  • Administrator (email) – no delay;
  • Administrator (SMS) – 15 minutes delay;
  • Manager (SMS) – 1 hour delay.

 

With this configuration, the administrator receives all failure notifications by email, but an SMS is sent only if the site stays “down” for more than 15 minutes. The manager receives an SMS only about major failures lasting more than an hour.
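The delay-based selection itself is simple enough to express in a few lines. This sketch only mirrors the example configuration above; it is not HostTracker's actual implementation:

```python
# Sketch of delay-based escalation: notify a contact once the downtime
# exceeds its configured "Notification Delay".
from dataclasses import dataclass

@dataclass
class Contact:
    name: str
    channel: str        # "email" or "sms"
    delay_minutes: int  # the "Notification Delay" setting

CONTACTS = [
    Contact("Administrator", "email", 0),   # immediately
    Contact("Administrator", "sms", 15),    # after 15 minutes down
    Contact("Manager", "sms", 60),          # after one hour down
]

def contacts_to_notify(downtime_minutes: int) -> list[Contact]:
    """Return the contacts whose delay has already elapsed."""
    return [c for c in CONTACTS if downtime_minutes >= c.delay_minutes]

for c in contacts_to_notify(20):  # the site has been down for 20 minutes
    print(f"notify {c.name} via {c.channel}")
# -> Administrator via email, Administrator via sms
```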

 

Setting up the contact working schedule

Suppose that one administrator cannot cope alone, and we have hired a second one. The first works during the first half of the week, the second during the second half. Accordingly, the notifications should be sent to the administrator “on duty”. To set up this scenario, use the “Set the contact working hours” window in the contact settings.

In this case the first administrator will receive SMS notifications from Monday to Thursday inclusive. Additionally, you may divide the notifications between different employees according to the time of day, for example by appointing day and night administrators.

Conclusion: with the help of relatively simple mechanisms we can cover most notification fine-tuning scenarios.

Host-Tracker under Windows Azure
Those who are actively involved with the Web should know Host Tracker, a company from Ukraine which has been running one of the leading global web monitoring services since 2004. Its goal is to monitor site health and accessibility in near real time. Using its alert message system, Host Tracker helps reduce downtime, improve the quality of service for users, quickly localize troubles, etc.
Architecturally, Host Tracker includes a server-based hub, acting both as a data collector and a control center, and a series of software agents launched in various regions – typically using equipment operated by major providers, hosters and affiliates. The geographically distributed architecture provides overall system reliability and also allows collecting data on access speed, bandwidth and other key performance characteristics at the regional level – a critically important feature for international business.
The first version of Host Tracker, which is still functioning and providing services for tens of thousands of customers, was Linux-based. Today it is supported by nine control servers, located in two data centers on a colocation basis, and a few dozen agents. Considering that the final objective of web monitoring is to increase the uptime of client web resources – and 95% of Host Tracker customers were able to increase it to 99% – the performance and accessibility of the service itself are not just critical, but fundamental parameters that influence the whole business. Theoretically, Host Tracker should demonstrate accessibility close to 100%. However, the extensive growth of the service made this hard to achieve.
Host Tracker was facing constantly increasing network traffic – a problem for seamless operation of the service. The inability to add new control servers on the fly and the difficulty of maintaining heterogeneous hardware of varying age were other limiting factors. Moreover, the desire to develop the service through wider protocol and network service support was meeting certain obstacles. “Unfortunately, for Linux there was a limited choice of ready-to-use solutions and libraries, while inventing something completely new was difficult,” says Artem Prisyazhnyuk, Host Tracker's director. “We had the idea of exchanging our technology stack for a more sophisticated one, and after taking a closer look at the .NET platform and its potential in terms of scalability and network support, I realized that was exactly what we had been looking for.”
Clearly, migrating to a completely different platform was a complex task – the project extended over three years. However, it was a blessing in disguise: during this period cloud computing emerged, which seemed an ideal tool both for solving the scalability problem and for setting aside one's own infrastructure entirely. Besides, the PaaS model removed most of the administration effort and allowed controlling the application as a self-contained entity, up to complete automation; thus Windows Azure in fact had no alternatives.
As a result, the second version of Host Tracker, whose commercial operation started in May 2012, already runs under Windows Azure. Its central component is implemented as a Web Role associated with a SQL Azure Database – it provides the external portal, analytics and report generation, and control of the monitoring applications. The latter are implemented as Worker Role instances, which also use SQL Azure Database to store their data and provide service scalability depending on the network load. The agents function as they did before, and their transfer to Windows Azure is being considered.
Today, Host Tracker uses the HTTP/HTTPS and ICMP protocols, monitors specific ports, supports various HTTP methods (HEAD/POST/GET), etc.



Alert reporting is available via email, SMS and instant messages. The customer can receive reports with statistics about the monitored resources and their performance. Setting up monitoring for five sites takes only about six minutes, the average detection time in case of failure is within a couple of minutes, and it takes 1-3 minutes more to inform the customer about the problem. Using this service, anyone can check any site, including access from various regions.
As a result, if the transfer to the .NET platform itself gave Host Tracker the potential to modernize, optimize the application architecture and implement new internal functions, the migration to the cloud made it possible to drop less important though time-consuming activities, such as administering the solution, and above all to reach the necessary performance indicators. Microsoft declares 99.9% accessibility for all basic Windows Azure services and guarantees monthly refunds should this indicator be lower. This creates firm ground for operating services like Host Tracker, for which accessibility is the most critical parameter. Using the cloud infrastructure also provides better protection for the service: unauthorized access to the application and many types of attacks are effectively excluded, while data safety is ensured by triple replication.
Host Tracker received another advantage from abandoning its own infrastructure. The service's performance characteristics are also rather critical, for they directly affect the operation of the failure reporting system. In this respect, Windows Azure is a virtually inexhaustible source of computing power. This means that by starting additional monitoring instances in good time, Host Tracker's operational parameters can be kept at the necessary level. Moreover, the cloud environment is exactly what is needed to make this process almost fully automatic, removing the need for direct control.
Twelve Ways to Improve Website Uptime
It may seem strange, but instead of simply going down, web services usually become so overloaded with user requests that they slow down and then become unresponsive. Nowadays it is extremely important to provide secure, high-quality web hosting that is available at any time, as e-business keeps gaining popularity. Website owners demand perfect service, 100% uptime and quality assurance. Numerous techniques may be used to make access to a website smooth, thus increasing uptime.

Here we offer twelve methods recommended for achieving performance gains and increased uptime. Both software and hardware optimization are required: many software features can be improved through general, better coding standards applied by the website manager, while the company providing the web hosting services needs to improve the hardware constantly.
You can measure the availability of a website using monitoring services such as Host-Tracker.

Software Optimization
The following are the first six ways to improve uptime:

  • Split the databases
  • Separate the read and write databases
  • Use popular content caching more often and improve its quality
  • Optimize static content
  • Ensure compressed delivery of content
  • Optimize the content management system

To begin with, splitting the databases – horizontally, vertically, or combining the two – is essential, as it makes the connection more reliable. It is also good to separate the read and write databases, which enables a master/slave setup. These actions help extend the database infrastructure for future use.
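One possible shape of such read/write routing in application code is sketched below. For the sake of a runnable example it uses sqlite3 and points both connections at the same file; in a real setup the replica would be a different host kept in sync by the database's own replication:

```python
# Toy read/write splitting: writes go to the primary, reads to a replica.
import sqlite3

primary = sqlite3.connect("primary.db")  # master: receives all writes
replica = sqlite3.connect("primary.db")  # stand-in for a read replica

def execute(sql: str, params=()):
    """Route a statement to the right database by its verb."""
    is_read = sql.lstrip().upper().startswith("SELECT")
    conn = replica if is_read else primary
    cur = conn.execute(sql, params)
    if not is_read:
        conn.commit()
    return cur

execute("CREATE TABLE IF NOT EXISTS users (id INTEGER, name TEXT)")
execute("INSERT INTO users VALUES (?, ?)", (1, "alice"))  # -> primary
print(execute("SELECT * FROM users").fetchall())          # -> replica
```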

Second, when you configure the system to cache popular content better and use the cache more often, your site scales up more easily with many concurrent users. Internet caching is no different from caching on a computer: popular content is stored in a separate container, giving users much quicker access to the information.

Optimizing static content is one more way to speed up access to pages and files. One means of doing so is to compress images as far as possible (while, of course, preserving their quality). It is also worth checking whether the web server can deliver compressed content; this does not apply to images, as they are already compressed files. Take care to have all the appropriate settings in place from the beginning.
One more appropriate step is to improve the content management system by reducing the number of database calls per page request. As with any kind of connection, the less information has to travel back and forth, the faster the page is served. The number of calls to the database should therefore be as low as possible – this ensures that users can access the content at the greatest speed.

Software and Hardware Optimization
The next six methods for increasing uptime are:

  • Use content delivery networks
  • Use emerging standards, such as HTML5
  • Improve the programming techniques
  • Add the content “expires” headers
  • Lessen the number of HTTP requests
  • Use faster Ethernet connections

Content delivery networks allow operating with larger amounts of media while improving the site's performance. They are designed to direct traffic onto private networks. Such services handle large media files by routing traffic along the Internet edge rather than directly, averting extreme overloading. Because content delivery networks handle the large files, the servers are unloaded, providing a fast, high-quality connection.

Emerging standards, such as HTML5, include mechanisms that improve websites through advanced programming techniques aimed at websites and Internet communications. Such standards do not guarantee 100% that a website using them will not go down, but the mechanisms built into the code act as a backstop if it happens. You should also use improved programming methods when dealing with heavy loads and traffic spikes.

“Expires” content headers make automatically downloaded files cacheable for visitors. By adding these headers to your content you prevent the browser from re-downloading it on every visit, blocking pointless HTTP requests during page views. Like reducing the number of database calls, reducing the number of HTTP requests keeps the connection speed stable and the servers unloaded.
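As a toy illustration of the idea, the following standard-library Python server adds long-lived cache headers to everything it serves; real deployments would do this in the web server configuration instead:

```python
# Serve the current directory, telling browsers to cache files for a year.
from datetime import datetime, timedelta, timezone
from http.server import HTTPServer, SimpleHTTPRequestHandler

class CachingHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        expires = datetime.now(timezone.utc) + timedelta(days=365)
        self.send_header("Expires",
                         expires.strftime("%a, %d %b %Y %H:%M:%S GMT"))
        self.send_header("Cache-Control", "public, max-age=31536000")
        super().end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8000), CachingHandler).serve_forever()
```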
The last method is to upgrade Ethernet connection speeds. This makes it possible to cope with larger files and unexpected traffic spikes; many hosting providers consider it an excellent investment.

Speed improvements and downtime reduction make customers happier. Many of these methods for improving web server or website uptime can be implemented in a few minor steps. Such tactics – database rearrangement, software optimization, caching improvements, content compression, the use of content delivery networks and content management systems, improved programming practices and hardware upgrades – will improve both web host and website uptime, and thus also your business.

Shellshock vulnerability online check
Considering the recently discovered Shellshock vulnerability, HostTracker has created a tool for testing it.
Check your server for vulnerability

How does it work?

It is designed for a Linux server with a web server installed on it. The algorithm is very simple: we consecutively generate four HTTP requests:

  • 1. An ordinary request.
  • 2. A request that tries, using the vulnerability, to post a "harmful" cookie which causes a 2-second delay in the response to our special HTTP request.
  • 3. A request that tries, using the vulnerability, to post a "harmful" cookie which causes a 4-second delay in the response to our special HTTP request.
  • 4. The same as #3.


How to understand the result?

We compare the response times of all four requests. Three situations are possible:

  • 1. Vulnerability found. We may conclude this if the difference in response time is about 2 seconds between the request without a cookie and the one with the 2-second-delay cookie, as well as between the requests with the 2- and 4-second-delay cookies. It means our request was able to use the vulnerability and set these cookies.
  • 2. Vulnerability not found. All requests have about the same response time. The cookies were likely not installed because there is no vulnerability.
  • 3. Uncertain situation. If the response times differ widely, without matching the delays set by the cookies, we cannot say for sure. This can happen when the server is under high load. To check this, we use two requests with the same cookie (#3 and #4). If the response times of two identical checks vary, we conclude that the response time is not affected by the cookies – at least not only by them – and in this case our method cannot detect the vulnerability. (A simplified version of this test is sketched below.)
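For readers who want to reproduce the idea against their own server, here is a simplified sketch of the timing comparison. The URL is a placeholder for a CGI resource you control, the payload only calls `sleep`, and the thresholds are looser than in the real tool:

```python
# Simplified Shellshock timing test -- run it only against your own server.
import time
import urllib.request

URL = "http://your-server.example/cgi-bin/test.cgi"  # placeholder

def timed_request(cookie=None) -> float:
    req = urllib.request.Request(URL)
    if cookie:
        req.add_header("Cookie", cookie)  # the "harmful" cookie
    start = time.monotonic()
    try:
        urllib.request.urlopen(req, timeout=15).read()
    except OSError:
        pass
    return time.monotonic() - start

plain   = timed_request()                         # request 1
delay2  = timed_request('x=() { :; }; sleep 2')   # request 2
delay4  = timed_request('x=() { :; }; sleep 4')   # request 3
delay4b = timed_request('x=() { :; }; sleep 4')   # request 4

if delay2 - plain > 1.5 and delay4 - delay2 > 1.5:
    print("vulnerability found")
elif abs(delay4 - delay4b) > 1.5:
    print("uncertain: response time unstable, maybe high load")
else:
    print("vulnerability not found")
```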

Safety of checks

Our test cannot damage your server. The only side effect is one extra cookie, which is used only for our requests and cannot affect the normal workflow of your site.

Snapshot - an instrument for site supervision

How does the site look like when I’m not looking at it? What if it looks bad or does not work at all? HostTracker offers an instrument for site supervision - snapshot feature. Let’s take a look at its practical application.

What’s going on with my site?

Nowadays it is usual to use different services and applications for site maintenance and support, and sometimes they report problems. Often, though, we lack information: Google Analytics or a similar service reports the downtime and recovery, but we will likely never know what exactly happened. To investigate the issue it is necessary to review logs, write to hosting support and perform many other exhausting actions, frequently with no result. There are also more interesting cases, when a site is unavailable from a certain country or does not load completely. Such problems can last for months, or even years, until they are accidentally detected.

One more important issue is content checking. It automatically reviews the content of the site and informs the responsible staff in case something has disappeared – for example, something could not be loaded from the database. But it is hard to find the cause if the issue is short-term, because people do not usually sit in front of a laptop refreshing the page every minute.

To solve this problem, HostTracker offers a new feature: snapshots. It is very simple to use and does not require any additional configuration. The service simply takes a snapshot of the checked page every time and saves it for review in two forms: page source code and HTML view. This lets you easily see how the page looked at the moment of failure, understand what is wrong and fix the problem quickly without spending time on diagnostics. It saves a lot of time for server administrators, developers and other concerned people.

How does it work

During regular checks, our servers try to download the monitored page at a predefined interval. Additional algorithms may be applied at that moment: the page can be parsed for keywords to make sure it is the one we are looking for (there are cases when an error page returns the 200 OK HTTP code, or when a redirect is triggered in case of error). If there is no error, fine. If there is, it is written down into the HostTracker log, which is easily available from the web; then notifications are sent and a snapshot is made.
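A bare-bones version of that check-and-snapshot loop might look like this; the URL and keyword are illustrative:

```python
# Download the page, verify a keyword, save the HTML when the check fails.
import datetime
import urllib.request

URL, KEYWORD = "https://example.com", "Welcome"

def check_and_snapshot() -> bool:
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            body = resp.read().decode("utf-8", errors="replace")
            ok = resp.status == 200 and KEYWORD in body
    except OSError:
        return False  # timeout / connection error: nothing to snapshot
    if not ok:
        stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
        with open(f"snapshot-{stamp}.html", "w", encoding="utf-8") as f:
            f.write(body)  # the page source as received at failure time
    return ok

print(check_and_snapshot())
```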

 

The snapshots can also be found in the log: if several errors were detected, a separate snapshot is available for each one.


There are some caveats, though. First, we do not run JavaScript while making a snapshot – the same applies to the regular check. Second, the error must be detectable: the server must return something. In case of a timeout or connection error a snapshot will not help, and only a corresponding record remains in the log.

Uptime and website monitoring


Website monitoring is a process of supervision over the performance of the site. Usually it is used to keep an eye on commercial sites or other pages for which high availability is very important.

Companies which provide website monitoring services let their clients check a website, server, port or any other entity reachable from the Internet. The responses are collected and analyzed. Usually the monitoring is performed from different locations, distributed over the whole world or over a specific continent or country. Such monitoring is called distributed monitoring and helps detect network-related issues as well as site- or server-related ones. Distributed monitoring also often makes it possible to analyze site performance from places close to the real customers, instead of distant locations with potentially high latencies.

The collected information is delivered in different forms – email reports, graphs and various smart dependencies – to give the client a comprehensive view of the site's performance. Parameters like load time and speed can help optimize the site. In case of a critical error, monitoring services use a variety of notification methods to deliver the alert to the client: SMS, voice call, instant messengers, email and others. This, together with immediate diagnostics, helps responsible persons such as server administrators or developers fix the site as soon as possible and minimize the duration of the failure.

Why is this important? For commercial sites, working time is directly tied to income. Roughly speaking, 2 hours of failure per day means that 1/12 of the potential clients are lost. Even worse, loyal customers may find a more reliable competitor if they cannot receive their services or goods when they want them. For other types of sites – government, educational, NGO – this is very important too: if people cannot find the information fast, they will find another source. Some parameters, like site speed, matter for search engines; others, such as database connectivity, can greatly affect the convenience of using the site. Monitoring internal values like CPU load, memory consumption and disk space is important for administrators in order to prevent performance degradation.

Another important purpose is verification of the hosting provider's SLA (service-level agreement). For technical reasons, no site can be available 100% of the time over a long enough period: sometimes servers are rebooted, updated or upgraded. So each hosting provider guarantees a specific value, called uptime, which limits how long a site may be down for technical reasons. Uptime is usually measured in percent. The table below shows, for each uptime value, how long a site may be down per year:

  • 90%          876 hours
  • 95%          438 hours
  • 99%          87.6 hours
  • 99.9%       8 hours 45 minutes
  • 99.99%     52.6 minutes
  • 99.999%   5 minutes 15 seconds
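These numbers follow from simple arithmetic: the allowed downtime is the complement of the uptime applied to a full year:

```python
# Yearly downtime allowed by each uptime value: (1 - uptime) * 8760 hours.
HOURS_PER_YEAR = 365 * 24  # 8760

for uptime_pct in (90, 95, 99, 99.9, 99.99, 99.999):
    down_hours = (1 - uptime_pct / 100) * HOURS_PER_YEAR
    print(f"{uptime_pct}% uptime -> {down_hours:.2f} hours "
          f"({down_hours * 60:.1f} minutes) of downtime per year")
```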

If the real site performance does not correspond to the SLA, it can be grounds for a claim and a refund request. It can also help customers select the best hosting for their needs.

Monitoring companies often provide some additional services, like vulnerability check, virus scanning, domain and certificate expiration check and many others, in order to make a useful product for their customers.

Different ways to monitor

Depending on the purpose, monitoring can be performed in several ways. Internal monitoring tools require some software to be installed inside the monitored system, for example a corporate network; they help detect network issues, assess system performance, and rule out or catch hardware problems. External monitoring is performed from outside; its purpose is to check the availability and performance of the system through a third party's eyes. Real user monitoring is external monitoring which simulates a real visitor of the site; depending on its complexity, it can analyze page loading, content and sometimes even design problems. The most advanced monitoring lets the client create a scenario for a visitor. This is called transaction monitoring and performs steps one by one: load the page, navigate through the menu, make a purchase (a sketch is shown below). Passive monitoring is performed by code integrated into the website which sends specific information to a collector server each time the page is loaded; it helps analyze customers' actions on the site and the traffic.
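A transaction scenario is essentially a list of named steps executed in order, where the first failure aborts the run. A minimal sketch (the URLs are placeholders):

```python
# Toy transaction monitor: run scenario steps one by one, stop on failure.
import urllib.request

def page_ok(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except OSError:
        return False

SCENARIO = [
    ("load the page",      lambda: page_ok("https://shop.example.com/")),
    ("open the catalogue", lambda: page_ok("https://shop.example.com/catalogue")),
    ("view a product",     lambda: page_ok("https://shop.example.com/item/42")),
]

for name, step in SCENARIO:
    if not step():
        print(f"transaction failed at step: {name}")
        break
else:
    print("transaction completed")
```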

Monitoring services usually support different check protocols and can monitor not only websites but also other entities such as file servers, mail servers, specific ports etc. Depending on the task, the monitoring interval can vary from several seconds to once per day.
