Uptime
Uptime is the period of time during which a site is up and performing well.

Uptime corresponds to the time when a site is accessible from the Internet. The opposite term, downtime, shows how long a site has been unavailable during a given period. Uptime is usually measured as a percentage, most often over a year, and that percentage can easily be converted into a time value. Some typical uptime values and the corresponding periods of unavailability over a year are shown here:

90% - 876 hours

99% - 87 hours, 36 minutes

99.9% - 8 hours, 45 minutes, 36 seconds

99.99% - 52 minutes, 34 seconds

So high uptime is really important. Even though 99% may look like a high value, it corresponds to several days of failure per year; if those happen in a row, many clients can be lost. The uptime value is usually guaranteed by the web hosting provider where the site is hosted. Website monitoring can help you increase uptime and check whether the value declared by the hosting company is real.
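The percentage-to-time conversion behind the list above can be sketched in a few lines of Python (a simple illustration, not part of any HostTracker tooling):

```python
from datetime import timedelta

def downtime_per_year(uptime_percent: float) -> timedelta:
    """Maximum downtime per (non-leap, 8760-hour) year for a given uptime percentage."""
    hours_per_year = 365 * 24  # 8760
    return timedelta(hours=hours_per_year * (100 - uptime_percent) / 100)

for pct in (90, 99, 99.9, 99.99):
    print(f"{pct}% uptime -> {downtime_per_year(pct)} of downtime per year")
```

Running this reproduces the figures in the list, e.g. 99.9% uptime allows 8 hours, 45 minutes, 36 seconds of downtime per year.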

  • CM.Glossary.WebsiteMonitoring
  • CM.Glossary.Downtime
  • CM.Glossary.WebHosting
  • CM.Glossary.Availability
Host-Tracker under Windows Azure

Anyone actively involved with the Web should know HostTracker, a company from Ukraine that has been running one of the leading global web monitoring services since 2004. Its goal is to monitor site health and accessibility in near real time. With its alerting system, HostTracker helps reduce downtime, improve quality of service for users, and quickly localize problems.

Architecturally, HostTracker consists of a server-based hub, acting as both a data collector and a control center, and a set of software agents launched in various regions – typically on equipment operated by major providers, hosters, and affiliates. The geographically distributed architecture provides overall system reliability and also allows collecting data on access speed, bandwidth, and other key performance characteristics at the regional level – a critically important feature for international business.

The first version of HostTracker, which is still operating and serving tens of thousands of customers, was Linux based. Today it runs on nine control servers, colocated across two data centers, and a few dozen agents. Since the ultimate objective of web monitoring is to increase the uptime of clients' web resources – and 95% of HostTracker customers were able to increase theirs up to 99% – the performance and accessibility of the service itself are not just critical but fundamental parameters that influence the whole business. Theoretically, HostTracker should demonstrate accessibility close to 100%; however, the extensive growth of the service made this hard to achieve.

HostTracker was facing constantly increasing network traffic – a problem for seamless operation of the service. The inability to add new control servers on the fly and the difficulty of maintaining non-uniform hardware of different ages were further limiting factors. Moreover, the desire to extend the service with wider protocol and network service support ran into obstacles. “Unfortunately, for Linux there was a limited choice of ready-to-use solutions and libraries, while inventing something completely new was difficult,” says Artem Prisyazhnyuk, HostTracker's director. “We had the idea of replacing our technology stack with a more sophisticated one, and after taking a closer look at the .NET platform and its potential in terms of scalability and network support, I realized that was exactly what we had been looking for.”

It was clear that migrating to a completely different platform would be a complex task – the project stretched over three years. However, it proved a blessing in disguise: during this period cloud computing emerged, which seemed an ideal tool both for solving the scalability problem and for shedding one's own infrastructure entirely. Besides, the PaaS model removed most of the administration effort and allowed controlling the application as a self-contained entity, up to complete automation – so Windows Azure in fact had no alternatives.

As a result, the second version of HostTracker, whose commercial operation started in May 2012, already runs on Windows Azure. Its central component is implemented as a Web Role backed by SQL Azure Database; it provides the external portal, analytics and report generation, and control of the monitoring applications. The latter run as Worker Role instances, which also use SQL Azure Database to store their data and which let the service scale with network load. The agents function as before, though moving them to Windows Azure is being considered.
Now, HostTracker monitors over the HTTP/HTTPS and ICMP protocols, can probe specific ports, and supports various HTTP methods (HEAD/POST/GET), among others.
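The essence of such an HTTP check can be sketched as follows (illustrative only – the real HostTracker agents are .NET-based, and the function names here are invented):

```python
import http.client
from urllib.parse import urlsplit

def classify_status(code: int) -> str:
    """Treat 2xx/3xx responses as 'up', everything else as 'down'."""
    return "up" if 200 <= code < 400 else "down"

def check_http(url: str, method: str = "HEAD", timeout: float = 10.0) -> str:
    """Issue a single HTTP(S) request and classify the result."""
    parts = urlsplit(url)
    conn_cls = (http.client.HTTPSConnection if parts.scheme == "https"
                else http.client.HTTPConnection)
    conn = conn_cls(parts.netloc, timeout=timeout)
    try:
        conn.request(method, parts.path or "/")
        return classify_status(conn.getresponse().status)
    except OSError:
        return "down"  # connection refused, timeout, DNS failure, ...
    finally:
        conn.close()
```

A real monitoring agent would run such checks on a schedule from several regions and feed the results back to the central hub.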
Alarm reporting is available via email, SMS, and instant messages, and customers can receive statistical reports on the monitored resources and their performance. Configuring monitoring for five sites takes only about six minutes, while the average response time in case of failure is within a couple of minutes, plus another 1–3 minutes to inform the customer about the problem. Using the service, anyone can check any site, including access from various regions.

As a result, if on one side the transfer to the .NET platform gave HostTracker the potential to modernize, optimize the application architecture, and implement new internal functions, then on the other side the migration to the cloud made it possible to drop less important though time-consuming activities such as administering the solution and, above all, to reach the necessary performance indicators. Microsoft declares 99.9% availability for all basic Windows Azure services and guarantees monthly refunds should this indicator drop lower. This creates firm ground for operating services like HostTracker, for which accessibility is the most critical parameter. The cloud infrastructure also protects the service better: unauthorized access to the application and many types of attacks are effectively excluded, while data safety is ensured by triple replication.

HostTracker gained another advantage from abandoning its own infrastructure. The service's performance characteristics are also critical, as they directly affect the failure-reporting system. In this respect, Windows Azure is a virtually inexhaustible source of computing power: by starting additional monitoring instances in time, HostTracker's operating parameters can be kept at the necessary level. Moreover, the cloud environment is exactly what is needed to make this process almost fully automatic, removing the need for direct control.

 
Blacklisting

Are you sure your domain isn't currently listed in some anti-spam database? We recommend you check! You could be one of those who accidentally get blocked for spreading spam, malware, or viruses, or simply for hosting dangerous, illegal, or prohibited content (or links to such content) on their pages. Apparently, anyone can be blacklisted – so what can you do? How can you detect such a problem? Are there any protection mechanisms? That is what this article is about.

Why is this happening?

There are tens of thousands of different DNSBL servers out there, and each has its own standards and criteria for which IPs should be blacklisted. Moreover, these conditions are constantly changing. Therefore there is no guarantee that even a law-abiding domain – one that meets all standards of copyright and trademark law, the laws of privacy and publicity, and other applicable regulations – won't one day be filtered out as a source of spam.

Actually, HostTracker's team recently faced the risk of being banned too. As a website monitoring service, HostTracker sends many different reports daily to the people who request them, so it was easy to determine that some of our emails hadn't been delivered and had been dumped into a spam folder.

So what should you do?

Understanding the problem is the first step to fixing it. So it's crucial to be aware of the warning signs of a listing threat and to be able to react quickly when you recognize them. That is why we recommend our new feature – the "DNSBL" option on the "Check site instantly" bar – which we have implemented just for you.

This option is not exactly brand new, as there are dozens of check services out there (for instance, Mxtoolbox) that conveniently test whether your domain is on some blacklist. But we soon realized that we needed to arm HostTracker with this check as well – without such a tool in place, we were flying blind.

With the above in mind, here is how the "DNSBL" check works:

If HostTracker detects some suspicious activity from your IP address, you'll be notified immediately with an alert about the possible listing reason. For you this means the ability to trace the reason the site was blocked back to its origin.
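Under the hood, a DNSBL lookup is just a DNS query: the IP's octets are reversed, prepended to the blacklist's zone, and any answer means the address is listed. A minimal Python sketch (the zone name and helper functions are illustrative, not HostTracker's actual implementation):

```python
import socket

def dnsbl_query_name(ip: str, zone: str) -> str:
    """Build the DNSBL query name: reversed IPv4 octets + blacklist zone."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    """True if the DNSBL returns an A record for the query name."""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True   # any answer (usually 127.0.0.x) means "listed"
    except socket.gaierror:
        return False  # NXDOMAIN: not on this blacklist
```

For example, to ask the zone whether 203.0.113.7 is listed, the resolver would look up the name `7.113.0.203.zen.spamhaus.org`.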

Usually a domain is blocked not at once but only after a thorough analysis of many factors. If you have already discovered your IP on one or more blacklists, don't panic: more often than not, all you need to do is follow their specific removal process. That is, visit the corresponding blacklist's website and, following the removal instructions, enter the domain name and press "Delist". Additionally, you may need to contact the blacklist operator with a removal request. Note that you must first resolve all the issues that caused the listing.

So, if you happen to discover your IP address has been involved in spamming, resolve the underlying issues and you'll be removed automatically once a certain period of time elapses. How can you stay protected and off blacklists? The best option is to set up PTR and SPF records for your mail, as this eliminates the likelihood of your domain name getting fraudulently spoofed and abused by spammers, or blocked for appearing to use a dynamic IP address.
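For example, an SPF policy is published as a DNS TXT record on the domain, listing the hosts allowed to send its mail (the domain, IP, and included zone below are placeholders):

```
example.com.  IN  TXT  "v=spf1 ip4:203.0.113.10 include:_spf.example-mailer.com -all"
```

The trailing `-all` tells receiving servers to reject mail from any sender not covered by the record.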

There's no single cause of being blacklisted. Generally speaking, if your IP has been tagged accidentally, the removal process won't take much time and effort. But it is essential to be proactive, so you can act fast when such situations arise. The sooner you find out about the issues, the easier they are to deal with, and the more likely your domain name won't be deemed illegal across the entire set of DNSBL databases. Use our new "DNSBL" option on the "Check site instantly" bar to always stay aware of the "status" of your resource and never be tossed and turned by the tides of website hassles and outages. Remember, we're looking forward to any interesting comments and suggestions regarding our service and its work.

 
