POST method example

Example of monitoring a web form that uses the POST method.

Suppose you have a simple form like this:

<form action="some-site.com/some-script.cgi" method="post">
  <input type="text" name="login" value="">
  <input type="password" name="password" value="">
  <input type="submit" name="Submit" value="Login">
</form>
A user fills in this form with login "Peter" and password "1234" and clicks Submit. If everything is OK, the script prints "LoginOk" on the result page.
To create a monitoring task for this script, fill in the following fields:

URL: some-site.com/some-script.cgi
HTTP method: choose POST
In the "POST parameters" field, add three lines:
login=Peter
password=1234
Submit=Login
In the "Content check" field: LoginOk
Results: on every check, HostTracker will submit this form and consider the check successful if the keyword LoginOk is returned.
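The logic of such a check can be sketched in a few lines of Python. This is a minimal illustration of the idea, not HostTracker's actual implementation; the URL and field names are the ones from the example above, and the `fetch` parameter is injectable so the check logic can be exercised without network access.

```python
# Sketch of a POST content check using only the Python standard library.
from urllib import request, parse

def default_fetch(url: str, data: bytes) -> str:
    """POST the encoded form data and return the response body as text."""
    with request.urlopen(url, data=data, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def post_form_check(url: str, params: dict, keyword: str,
                    fetch=default_fetch) -> bool:
    """Submit the form and report success if the keyword is in the response."""
    data = parse.urlencode(params).encode("utf-8")  # login=Peter&password=...
    return keyword in fetch(url, data)

# Example usage, mirroring the task fields above:
# ok = post_form_check(
#     "http://some-site.com/some-script.cgi",
#     {"login": "Peter", "password": "1234", "Submit": "Login"},
#     "LoginOk",
# )
```

A check like this succeeds only when the script's response actually contains the keyword, which is exactly what the "Content check" field expresses.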

"Thank you so much for your service. We were suspecting problems with our hosting company but they denied any problems saying the issues must be at our end. We know we do have issues at our end but still suspected that wasn't the entire story. Your service was able to prove that they are indeed going down regularly - on average twice a week during the trial period. Thanks again for providing the information we needed to make a proper decision on this issue."
- B.
Why can low uptime affect not only your revenue but also your company's reputation?

While you may be spending more time and resources on developing your website, you need to be sure that its core is still performing well. There is a strong correlation between uptime and visitor conversions, and nothing drives your visitors to your competitors faster than extensive, ongoing downtime. Still wondering why website uptime is so important? Uptime is considered a critical metric of a website's well-being: it reflects the percentage of time your website is available to visitors. Let's take a deeper dive into this.

Track your website uptime, or you put your business at risk

Imagine you've already done all the hard work: built a dream team of talented individuals (designers, copywriters, developers, etc.) who handle their roles perfectly. Together you have created more than just a website: a Google-search standout that reflects your personal brand. That means you have placed specific keywords and phrases throughout your website and your brand's online profiles, so that when people look for you on Google, they are likely to find your website.

Now suppose someone wants to visit your website, but a current outage is stopping them from reaching the page. Whether you are a multinational corporation or a news portal, if your website goes down for even a few minutes, it can negatively impact your reputation, revenue, productivity, and appeal to your visitors. Downtime is clearly bad for your bottom line, but its cost varies across industries. Business size is the most obvious factor, but it is not the only one. Most visitors have little patience for even minor website hassles, especially when it comes to making major purchases. Therefore, ensuring your website stays up is key to a successful business.

An inaccessible website means losses for your clients

An inaccessible website puts visitors off using it. If your website is designed, for example, for reading books online but is currently unreachable, none of your customers, including the loyal ones, will be able to obtain the information they need. That means your readers cannot get to know you, and you cannot connect with them. Imagine that right now someone wants to buy or read a book on your platform. Try as they may, they cannot reach your website, because it is down. They will think twice before returning to a platform that causes frustration and irritation, and will certainly find another one where buying and reading are much easier. Having a 24/7/365 online presence means you are far more likely to gain customers, increase your credibility, and grow your business.

Is your server down?

Reliability is a key element of good web hosting. Reliable hosting providers not only try to keep websites online, secure, and fast, but also ensure that they are reachable. Consequently, an unreliable hosting company puts the businesses that use its service at risk: every minute their websites are down, they pay for it in business success and in their image among customers. What does this mean for hosting providers? In an increasingly connected world, information spreads faster than ever, and we all know the power of customer reviews. Negative feedback can have a profound effect on a company's ability to do business and to stay ahead of the competition. When a company, in particular a web hosting company, loses the faith of its clientele, the result is either a huge outflow of customers or lasting damage to the brand.

Another example of how a company's reputation can be badly burned by downtime is an online store. Research shows that 60% of shoppers browse the web, and even more read product reviews, before making a purchase. There are many reasons why people choose not to shop at an online store, but perhaps the most striking is when the website is down. Obviously, it is a real struggle to keep visitors on a website during a sudden server outage. Downtime should be avoided at all costs, because visitors will not wait at your doorstep. They will leave your portal and never come back; after all, how can you lend an air of credibility to your website if you cannot help yourself?

Warren Buffett once said: "It takes 20 years to build a reputation and five minutes to ruin it."

Dropped out of search results?

Downtime issues also hurt your search engine rankings. When Google tries to rate your website and finds it down, your website will in most cases temporarily drop in the Google search rankings. Generally speaking, short periods of downtime won't hurt your rankings much, but long, consistent ones will blow your rankings to bits. Scary?

Proactive monitoring of your website is the best way to stay one step ahead of bottlenecks and outages. You can rest easy using the HostTracker service: HostTracker will let you know whenever an incident occurs, keeping you ahead of your website's issues. Spot problems before they escalate and protect your business from the losses they can create!

HostTracker under Azure

Those who are actively involved with the Web should know HostTracker, a company from Ukraine that has been running one of the leading global web monitoring services since 2004. Its goal is to monitor site health and accessibility in near real time. Using its alert system, HostTracker helps reduce downtime, improve quality of service for users, quickly localize problems, and more.

Architecturally, HostTracker consists of a server-based hub, acting both as a data collector and a control center, and a series of software agents launched in various regions, typically on equipment operated by major providers, hosting companies, and affiliates. The geographically distributed architecture provides overall system reliability and also allows collecting data on access speed, bandwidth, and other key performance characteristics at the regional level, a critically important feature for international business.
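The hub-and-agents idea can be illustrated with a short sketch: several regional agents probe the same resource, and the hub aggregates their results per region. This is a hypothetical illustration of the concept; the names and data structure are invented for the example, not HostTracker's actual API.

```python
# Hypothetical hub-side aggregation of per-region agent measurements.
from dataclasses import dataclass
from statistics import mean

@dataclass
class AgentResult:
    region: str          # where the probing agent runs
    ok: bool             # whether the check succeeded from this region
    response_ms: float   # measured response time in milliseconds

def aggregate(results: list[AgentResult]) -> dict:
    """Summarize raw agent measurements into a per-region report."""
    by_region: dict[str, list[AgentResult]] = {}
    for r in results:
        by_region.setdefault(r.region, []).append(r)
    return {
        region: {
            # fraction of successful checks from this region
            "availability": sum(r.ok for r in rs) / len(rs),
            # mean response time over successful checks only
            "avg_response_ms": mean(r.response_ms for r in rs if r.ok)
                               if any(r.ok for r in rs) else None,
        }
        for region, rs in by_region.items()
    }
```

A report shaped like this is what lets a hub distinguish "the site is down everywhere" from "the site is slow or unreachable only from one region", which is the point of the distributed design.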

The first version of HostTracker, which is still functioning and serving tens of thousands of customers, was Linux based. Today it is supported by nine control servers, located in two data centers on a colocation basis, and a few dozen agents. Considering that the final objective of web monitoring is to increase the uptime of clients' web resources (95% of HostTracker customers were able to raise theirs to 99%), the performance and accessibility of the service itself are not just critical but fundamental parameters that influence the whole business. Theoretically, HostTracker should demonstrate accessibility close to 100%. However, the extensive growth of the service made this goal hard to achieve.

HostTracker was facing constantly increasing network traffic, a problem for the seamless operation of the service. The inability to add new control servers on the fly and the difficulty of maintaining heterogeneous, aging hardware were further limiting factors. Moreover, the desire to extend the service with wider protocol and network service support was meeting obstacles. "Unfortunately, for Linux there was a limited choice of ready-to-use solutions and libraries, while inventing something completely new was difficult," says Artem Prisyazhnyuk, HostTracker's director. "We had the idea of replacing our technology stack with a more sophisticated one, and after taking a closer look at the .NET platform and its potential in terms of scalability and network support, I realized it was exactly what we had been looking for."

It was clear that migrating to a completely different platform would be a complex task; the project extended over three years. However, it turned out to be a blessing in disguise: during this period, cloud computing emerged, which seemed an ideal tool both for solving the scalability problem and for setting aside one's own infrastructure entirely. Besides, the PaaS model removed most of the administration effort and allowed controlling the application as a self-contained entity, up to complete automation; thus, Windows Azure in fact had no alternatives.

As a result, the second version of HostTracker, whose commercial operation started in May 2012, already runs on Windows Azure. Its central component is implemented as a Web Role backed by SQL Azure Database; it provides the external portal, analytics and report generation, and control of the monitoring applications. The latter run as Worker Role instances, which also use SQL Azure Database to store their data and scale the service according to network load. The agents function as before, and moving them to Windows Azure is being considered.
Now HostTracker uses the HTTP/HTTPS and ICMP protocols, monitors specific ports, and supports various HTTP methods (HEAD/POST/GET), among other checks.
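Two of the check types mentioned here, a TCP port probe and an HTTP HEAD request, can be sketched with the Python standard library. This is an illustrative sketch of what such checks do, not HostTracker's code; the hosts and URLs passed in would be whatever the monitoring task specifies.

```python
# Minimal port and HEAD checks using only the Python standard library.
import socket
from urllib import request

def tcp_port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_head_status(url: str, timeout: float = 5.0) -> int:
    """Issue a HEAD request and return the HTTP status code."""
    req = request.Request(url, method="HEAD")
    with request.urlopen(req, timeout=timeout) as resp:
        return resp.status
```

HEAD is a natural default for plain availability checks because the server returns only the status line and headers, not the body, while POST with a content check (as in the form example earlier on this page) verifies that the application behind the URL actually works.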
 

Figure: HostTracker instant check



Alarm reports are available via email, SMS, and instant messages. The customer can receive reports with statistics about the monitored resources and their performance. It takes only about 6 minutes to configure monitoring for five sites, while the average response time in case of failure is a couple of minutes, plus 1 to 3 minutes more to inform the customer about the problem. Using this service, anyone can check any site, including access from various regions.

As a result, if on the one side the transfer to the .NET platform itself gave HostTracker the potential to modernize, optimize the application architecture, and implement new internal functions, then on the other side the migration to the cloud made it possible to drop less important though time-consuming activities such as administering the solution, and above all to reach the necessary performance indicators. Microsoft declares 99.9% availability for all basic Windows Azure services and guarantees monthly refunds should this indicator fall lower. This creates a firm ground for operating services like HostTracker, for which availability is the most critical parameter. Using the cloud infrastructure also provides better protection for the service: unauthorized access to the application and many types of attacks are effectively excluded, while data safety is ensured by triple replication.

HostTracker gained another advantage from abandoning its own infrastructure. The service's performance characteristics are also critical, for they directly affect the operation of the failure reporting system. In this respect, Windows Azure is virtually an inexhaustible source of computing power, which means that by starting additional monitoring instances in time, HostTracker's operating parameters can be kept at the necessary level. Moreover, the cloud environment makes this process almost fully automatic, removing the need for direct control.
