Uptime
Uptime is the period of time during which a site is up and performing well.

Uptime corresponds to the time when a site is accessible from the Internet. The opposite term, downtime, shows how long a site has not been working during a specified period. Uptime is usually measured as a percentage, and the period chosen is typically one year. Percentages over the year can easily be converted into time values. Some typical uptime values and the corresponding periods of unavailability during the year are shown here:

  • 90% - 876 hours
  • 99% - 87 hours, 36 minutes
  • 99.9% - 8 hours, 45 minutes, 36 seconds
  • 99.99% - 52 minutes, 34 seconds
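The arithmetic behind this table is simple. Assuming a 365-day year, a short Python sketch reproduces the values above:

```python
def yearly_downtime(uptime_pct):
    """Return (hours, minutes, seconds) of downtime per 365-day year
    for a given uptime percentage."""
    seconds = round((1 - uptime_pct / 100) * 365 * 24 * 3600)
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    return hours, minutes, secs

for pct in (90, 99, 99.9, 99.99):
    h, m, s = yearly_downtime(pct)
    print(f"{pct}% -> {h} h {m} min {s} s")
# 99.9% -> 8 h 45 min 36 s
```

Even "three nines" of uptime still allows almost nine hours of outage per year.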

So high uptime is really important. Even though 99% seems like a high value, it corresponds to several days of failure per year; if that downtime happens in a row, many clients can be lost. The uptime value is usually guaranteed by the web hosting provider where the site is hosted. Website monitoring can help you increase uptime and check whether the value declared by the hosting company is real.

Database Monitoring with HostTracker

We’re happy to introduce our newest monitoring feature - Database Check – that is easy to use, crystal-clear to understand and designed to get you through your website ‘critical hours’ as smoothly as possible.

There is a wealth of collector services for gathering and analyzing performance information: from the number of visitors and disk usage to the duration of a database session and the geographical distribution of the website's audience. In the real world, it is very common to see two or more of these metrics presented together. The problem is that you should not only collect these numbers but also somehow examine and compare them.

All of that eventually led the HostTracker team to develop the Database Check – a tool for in-depth database monitoring and successful troubleshooting of database performance problems.

Task Configuration Concepts

Generally speaking, adding a new Database Check won't take long to set up. Once it is enabled, you’ll get the chance to adjust the check to suit your overall strategy. Now let’s look at some of the available options to better understand how they can be effectively applied.

First and foremost, this feature can run a specific database query every time the check executes, while still letting you manage how the resulting data is processed. If you don't specify any query, the service simply verifies that it can connect to the database. In addition, the Database Check tool supports a deferred execution option, which allows you to specify the point of query declaration and track its results.

Besides that, you can use any command as a database query, from a simple SELECT statement to a more complex stored procedure. However, the specified request must complete within 30 seconds; otherwise an error message is generated, typically a 408 Request Timeout or a related error.

Please note: when enabling a new DB monitoring check, there are a couple of things to consider. Use an SQL statement that returns a single value, and make sure this value is returned in the first column of the first row. This is vital for further performance analysis of the monitored system.

At the same time, if you use a DML command as the statement, you will also get the total number of rows affected.

The following example shows a graphical interpretation of the results of executing a DELETE statement with a specified condition:

For the record, the collected results are not only displayed on a real-time graph but also stored for later analysis. Such a history can provide valuable insights into how to optimize your database performance.

Moreover, at this stage you can specify the type of selection criteria: none, equal/not equal, greater/less than, or in/out of range.

Once the system finds a deviation from the expected results, you will receive a notification. You can also control which events trigger alerts and through which channels (Skype, Viber, Telegram, Slack, etc.).
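The verification options above can be modeled as a small decision function. This is only an illustrative sketch: the mode names and the function signature are hypothetical, not HostTracker's actual API.

```python
def should_alert(value, mode, expected=None, low=None, high=None):
    """Return True when `value` deviates from the expected result
    and a notification should be sent."""
    if mode == "none":
        return False                      # no verification, never alert
    if mode == "equal":
        return value != expected          # alert when the value differs
    if mode == "not_equal":
        return value == expected          # alert when the value matches
    if mode == "greater_than":
        return value <= expected          # expected is the required minimum
    if mode == "less_than":
        return value >= expected          # expected is the allowed maximum
    if mode == "in_range":
        return not (low <= value <= high)
    if mode == "out_of_range":
        return low <= value <= high
    raise ValueError(f"unknown mode: {mode}")
```

With a "less than 1000" rule, for example, `should_alert(1200, "less_than", expected=1000)` returns True, meaning the monitored value has grown past the limit and a notification is due.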

First Steps In Starting Successful Performance Troubleshooting

This example shows how to create a check that both tracks the growth of all database log and data files and sends an alert when a critical database size is reached. The example query returns information about data/log file sizes, total space used, free space, and so on. Here is what you need to do:

  1. Create a new query that displays how much free space you have in your tablespace.

          SELECT
              convert(DECIMAL(12,2), round(sysfile.size/128.000, 2)) AS 'FileSize/mb',
              convert(DECIMAL(12,2), round(fileproperty(sysfile.name, 'SpaceUsed')/128.000, 2)) AS 'Used/mb',
              convert(DECIMAL(12,2), round((sysfile.size - fileproperty(sysfile.name, 'SpaceUsed'))/128.000, 2)) AS 'Free/mb',
              filegroup.groupname AS 'File-group',
              sysfile.[name], sysfile.[filename]
          FROM dbo.sysfiles sysfile (NOLOCK)
          INNER JOIN dbo.sysfilegroups filegroup (NOLOCK)
              ON filegroup.groupid = sysfile.groupid
          UNION ALL
          SELECT
              convert(DECIMAL(12,2), round(sysfile.size/128.000, 2)) AS 'FileSize/mb',
              convert(DECIMAL(12,2), round(fileproperty(sysfile.name, 'SpaceUsed')/128.000, 2)) AS 'Used/mb',
              convert(DECIMAL(12,2), round((sysfile.size - fileproperty(sysfile.name, 'SpaceUsed'))/128.000, 2)) AS 'Free/mb',
              (CASE WHEN sysfile.groupid = 0 THEN 'Log' END) AS 'File-group',
              sysfile.[name], sysfile.[filename]
          FROM dbo.sysfiles sysfile (NOLOCK)
          WHERE sysfile.groupid = 0
          ORDER BY [File-group], [name]

  2. After execution, you should get results similar to the following:

  3. Add a selection condition:

  • Query result – select "value in the first column of the first row (mainly for SELECT)".
  • Result verification – choose "less than" and add "1000" as the maximum value.

Finally, once all the previous stages have been completed, you’ll get the following behavior: if the log file size exceeds 1000 MB (about 1 GB), you will receive a notification.

Please keep in mind that the entire data-collection history is saved and always available to view. This means you can easily identify the cause of a particular problem, for instance the reason for rapid tablespace growth.

Adding a New Database Monitoring Task

To activate a new Database Check, you need to:

  1. Fill in the following fields:

  • Server – enter your server name;

  • Port – add your port number;

  • Database – add your database name;

  • User ID – enter the login name under which the check should be executed;

  • Password – add the password that corresponds to your login.

For this task only, it is recommended to create a new user account with limited rights.

  2. Provide access to your database. To do this, add the IP addresses of the HostTracker agents to your firewall whitelist and your server list.

Note that the IP addresses of our agents are permanent.

  3. When ready, click Save.

If you have any questions about this feature, well, just send us a message. We’re always ready to help!

Improve Site Uptime

It may seem strange, but instead of simply going down, web services usually become so overloaded with user requests that they first slow down and then become unresponsive. Nowadays it is extremely important to provide secure, high-quality web hosting services that are available at any time, as e-business keeps gaining popularity. Website owners demand perfect service, 100% uptime, and quality assurance. Numerous techniques can be used to make access to a website smooth, thus increasing its uptime.

Here we offer twelve methods recommended for achieving performance gains and increasing uptime. This requires both software and hardware optimization. Many software features can be upgraded through general, improved coding standards applied by the website manager, while the company providing the web hosting services needs to improve its hardware constantly.
You can measure the accessibility of your website with the HostTracker monitoring service.

Software Optimization
The following are the first six ways to improve uptime:

  • Split the databases
  • Separate the read and write databases
  • Use popular content caching more often and improve its quality
  • Optimize static content
  • Ensure compressed delivery of content
  • Optimize the content management system


To begin with, splitting the databases – horizontally, vertically, or a combination of the two – is essential, as it makes the connection more reliable. It is also good to separate the read and write databases, which enables a master/slave setup. These actions help extend the database infrastructure for future use.

Second, when you configure the system to cache popular content better, and to use the cache more often, your site will scale up more easily with many users operating it. Internet caching is no different from computer caching: it means storing popular content in a separate container, allowing much quicker access to the information for the users.

Optimizing static content is one more way to make access to pages and files quicker. One of the means for this is to compress images to the maximum extent possible (while, of course, preserving their quality). It is also necessary to check whether the web server can deliver compressed content; this setting does not apply to images, as they are already compressed files. Take care to have all the appropriate settings from the beginning.

One more appropriate thing to do is to improve the content management system by reducing the number of database calls per page request. It is like any type of connection: the less often information has to be sent, the more easily the connection is maintained. In these circumstances, the number of calls to the database should be as low as possible – this ensures that users can access the content at the greatest speed.
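The caching and call-reduction advice above can be illustrated with a tiny in-memory cache. Here `get_page` and its counter are hypothetical stand-ins for a CMS page renderer; a real site would more likely use a shared cache such as memcached or Redis, but the effect is the same:

```python
from functools import lru_cache

db_calls = 0  # counts how often the "database" is actually hit

@lru_cache(maxsize=128)          # keep the 128 most popular pages cached
def get_page(slug):
    """Hypothetical page renderer; in a real CMS this would run
    one or more database queries per request."""
    global db_calls
    db_calls += 1
    return f"<html>content for {slug}</html>"

# A thousand requests for a popular page hit the database only once;
# the other 999 are served from the cache.
for _ in range(1000):
    get_page("home")
print(db_calls)  # -> 1
```

This is exactly the trade the text describes: a small amount of memory spent on popular content in exchange for far fewer database calls per page view.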

Software and Hardware Optimization
Six more ways to augment uptime are the following:

  • Use content delivery networks
  • Use emerging standards, such as HTML5
  • Improve the programming techniques
  • Add the content “expires” headers
  • Lessen the number of HTTP requests
  • Use Ethernet connections, allowing for more speed


Content delivery networks allow you to serve larger amounts of media while improving the performance of the site. They are designed to direct traffic onto private networks, routing it along the edge of the Internet rather than straight through it and thus averting extreme overloading. Because content delivery networks handle the large files, they unload the origin servers, providing a quick, high-quality connection.

Emerging standards such as HTML5 include mechanisms that improve websites through advanced programming techniques aimed at web and Internet communications. Such standards do not guarantee 100% that a website using them will not go down, but the mechanisms built into the code act as auxiliary safeguards if it happens. Moreover, you should use improved programming methods when dealing with large loads and traffic spikes.

“Expires” content headers make automatically downloaded files cacheable for visitors. By adding these headers to your content, you prevent the browser from constantly re-downloading it, which blocks pointless HTTP requests on each page view. Just like reducing the number of database calls, reducing the number of HTTP requests keeps the connection speed stable and the servers not overloaded.
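As a rough illustration of such headers, the following Python sketch builds an Expires header one year in the future together with the equivalent modern Cache-Control value (the helper name is made up for this example):

```python
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime

def static_cache_headers(days=365):
    """Build caching headers for static assets: a classic Expires
    date plus the equivalent Cache-Control max-age in seconds."""
    expires = datetime.now(timezone.utc) + timedelta(days=days)
    return {
        "Expires": format_datetime(expires, usegmt=True),
        "Cache-Control": f"public, max-age={days * 24 * 3600}",
    }

print(static_cache_headers())
# Cache-Control for one year is "public, max-age=31536000"
```

With these headers attached to images, scripts, and stylesheets, repeat visitors load them from the browser cache instead of issuing new HTTP requests.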
The last method is to increase Ethernet connection speeds. This makes it possible to cope with larger files and unexpected traffic spikes. Many hosting providers consider it an excellent investment.

Speed improvements and downtime reduction make customers happier. Many of these methods for improving web server or website uptime can be achieved in a few minor steps. Such tactics, including database rearrangement, software optimization, caching improvements, content compression, use of content delivery networks and content management systems, better programming practices, and hardware upgrades, will improve both web host and website uptime, and thus your business.

 

 

 

 

 

 
