A persistent issue with Amazon’s CloudFront has been the lack of analytics available – what’s the point of hosting your video on CloudFront if you can’t tell whether anyone watched it, or whether users quit after the first three seconds? Today’s announcement of streaming access logs goes some way to addressing this. Amazon will now log all streaming access events, such as play, pause, seek, and stop, along with user details such as the IP address and other data items.
It’s worth noting that there is no built-in analytics solution for viewing the data; users will have to arrange for the logs to be read by third-party analytics services.
Posted in News
Tagged AWS, CloudFront
Amazon today announced the Beta of SNS (Simple Notification Service). SNS is a “push” messaging service for delivering real-time notifications to subscribers. Amazon already offers a non-push messaging service (SQS), which is primarily used by distributed applications to communicate. SQS messages are persisted in a queue so that other connected applications can ‘poll’ the queue and pull messages when they are ready. For example, a cloud app taking customer orders may send the orders as SQS messages; these could then be picked up by an on-premise order fulfillment application at the factory.
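As a rough illustration of that polling pattern, here is a minimal sketch using boto3, a present-day AWS SDK for Python (shown purely for illustration – the queue name and message body are made up):

```python
import boto3

sqs = boto3.client("sqs")

# The order-taking app pushes each order onto a queue ("orders" is a made-up name).
queue_url = sqs.create_queue(QueueName="orders")["QueueUrl"]
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42, "sku": "ABC"}')

# The fulfillment app polls the queue and pulls messages when it is ready.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    print("fulfilling:", msg["Body"])
    # Delete the message once processed so it is not delivered again.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```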
SNS, by contrast, is intended for real-time communications and will probably be consumed by people rather than applications. In the above example, SNS could be used to send the relevant parties a confirmation/notification email for the order received. Currently SNS supports HTTP, email, and email-JSON protocols, with SMS due to be added in the future. Pricing for HTTP notifications is $0.06 per 100,000 notifications sent (the first 100,000 are free), and $2.00 per 100,000 Email/Email-JSON notifications (the first 1,000 are free).
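The notification side of that example might look something like the following boto3 sketch (again illustrative – the topic name and email address are placeholders):

```python
import boto3

sns = boto3.client("sns")

# Create a topic and subscribe an interested party by email.
topic_arn = sns.create_topic(Name="order-confirmations")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="customer@example.com")

# Publishing pushes the notification to every subscriber in real time.
sns.publish(
    TopicArn=topic_arn,
    Subject="Order received",
    Message="Your order #42 has been received and is being processed.",
)
```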
This is a very useful addition to the AWS platform and will greatly reduce the effort developers previously had to put in to create notification systems.
See the AWS SNS page for full details.
Posted in News
CloudKick, the cloud monitoring startup, has just announced the availability of CloudKick Hybrid. The Hybrid product allows admins to monitor both on-premise and cloud servers in a single control panel.
CloudKick’s monitoring tool is web-based and tracks load, CPU, bandwidth, and memory. It can also run diagnostic tests and send SMS and email alerts when thresholds are breached. Using CloudKick, admins can already monitor performance across several cloud hosts (such as AWS, Rackspace, or GoGrid) from one central web console. With the addition of CloudKick Hybrid, on-premise servers can also be monitored from the same console.
Currently CloudKick is primarily a monitoring tool, but it has demoed cloud mobility features that allow admins to seamlessly transfer apps between cloud services from the CloudKick management console.
Posted in Articles
Windows Azure is just two months old and there are still a lot of unknowns regarding it. A couple of articles that might help clear some things up:
Posted in Articles, Reviews
Microsoft and Amazon have announced Windows License Mobility, which allows enterprise license holders to migrate their licenses to Amazon’s EC2 instances. The program is currently running only as a pilot, and Microsoft only allows a license to be used for Windows EC2 instances for a period of one year. After one year (if the program is not extended), the license reverts to a standard Windows Server license to be installed on a dedicated machine, and EC2 pricing reverts to standard Windows EC2 rates.
The signup process is quite lengthy, involving filling out registration forms, requesting Microsoft’s confirmation, and then having Microsoft and Amazon countersign the agreement. The process is unlikely to be completed in less than 10 days.
Windows Server Standard Edition entitles the holder to run a single EC2 instance, whereas the Datacenter and Enterprise editions allow for four EC2 instances. There is no distinction between the various EC2 instance sizes.
Once a user has been approved, pricing drops to the same rate as Linux EC2 instances (i.e. approximately 30% less than the standard Windows EC2 instances – $0.34 versus $0.48 per hour for a Large instance, for example). Pricing is as below:
| Standard On-Demand Instances | Windows Pilot Usage | Standard Windows Usage |
| --- | --- | --- |
| Small | $0.085 per hour | $0.12 per hour |
| Large | $0.34 per hour | $0.48 per hour |
| Extra Large | $0.68 per hour | $0.96 per hour |
This offering is currently limited to license holders based in the US.
Akamai, the leading CDN vendor, has just announced the introduction of Akamai Download Analytics. Getting useful data on files served over CDNs has long been a pain point for CDN users. Akamai Download Analytics aims to address this by providing the following types of analytics:
- User engagement data, such as download duration and completion rate
- User analysis by geography, network, and connection speed
- User behavior data, such as start, stop, pause, and cancel
- Usage metrics, such as bytes delivered and bytes per download
- Custom reporting, such as the ability to define data sets, reporting dimensions, and metrics
Posted in News
Rackspace has just announced that it will offer Oracle on its Cloud Servers product. AWS has had Oracle offerings for some time now, and the announcement by Rackspace is clearly a move into the enterprise market, a major target of AWS. No word on pricing as yet as this is still in Beta – see here for details.
Over the years I think I have migrated about 30 sites (normally to different CMS platforms), and there are always unique issues involved in every migration. Although each migration is unique, there are some common issues that need to be addressed in almost every one:
1. 301 Redirects
Unless you are just moving a site between two servers and using an identical platform, it is likely the page URLs will change, which means that all inbound links and search engines will be looking unsuccessfully for the old page URLs. A 301 redirect from the old page to the new page is the method all search engines prescribe for switching page URLs.
In Apache this can be done in the .htaccess file, or each of the old pages can send a 301 redirect header pointing to the new page (this is the only method available for ASP.NET sites).
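As a minimal .htaccess sketch (the paths and domain are placeholders):

```apache
# Map a single old URL to its new home with a permanent (301) redirect
Redirect 301 /old-page.html http://www.example.com/new-page/

# Or redirect a whole section by pattern (requires mod_rewrite to be enabled)
RewriteEngine On
RewriteRule ^articles/(.*)$ http://www.example.com/blog/$1 [R=301,L]
```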
2. Prevent Pages from Being Indexed on the Staging Site
Before the site goes live, the content will be migrated to the new location and the new site will be tested, which means it will be visible on the internet. Whilst it needs to be visible for testing, you will not want search engines to start spidering your site and attempting to index the content under its testing URL. The testing URL is typically not published anywhere, but search engines always seem to find new sites – for example, when you are communicating with other site workers or freelancers, sometimes they will post a link on a forum.
Simply add a robots.txt file to the root of the site with the following text:
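```
User-agent: *
Disallow: /
```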
This will prevent any indexing, but be sure to remove this when the site goes live.
3. Communicate with Site Users
If the migration is accompanied by a redesign, it is usually obvious that a migration has occurred, but you should always inform site users of what is going on with the site they visit. Normally a notice on the site, an email, and a Twitter notification work best. Try to involve site users as much as possible to make them feel more engaged with the site and also to help with the troubleshooting – ask users to let you know about bugs or usability issues, and maybe run a small competition with a giveaway.
4. Watch For Errors
Regardless of your testing, there will be errors once your site goes live. Firstly, an error landing page should be made so the site fails gracefully and encourages users to contact you if there is a persistent issue. Bugs should be tracked, normally through a bug-tracking database, but just looking at site logs can be sufficient for simple content sites.
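In Apache, one simple way to wire up a graceful error page (the page name here is a placeholder) is via .htaccess:

```apache
# Serve a friendly page (with contact details) instead of the server default
ErrorDocument 404 /error.html
ErrorDocument 500 /error.html
```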
Once the site is live you should also closely watch the site’s response time for indications of site issues or problems with the new host. Pingdom is a good tool for this.
5. Backup the Legacy Site
Backup the entire legacy site and include everything that is required to recreate it – database, config files, web pages, etc. – as you may need to re-establish the site later.
6. Image / File Links
Most migrations will involve a change in directory structure, so images in the pages and links to files or downloads will need to be changed. A migration is also an ideal opportunity to move all images and files to a CDN. A small script can handle the rewriting, as sketched below.
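As a rough Python sketch (the directory layout, URL patterns, and CDN hostname are all made up for illustration), a one-off script could rewrite the references:

```python
import pathlib
import re

# Hypothetical example: point local image/file references at a CDN host.
OLD_LINK = re.compile(r'(src|href)="/(?:images|files)/([^"]+)"')
NEW_LINK = r'\1="https://cdn.example.com/\2"'  # cdn.example.com is a placeholder

for page in pathlib.Path("site").rglob("*.html"):
    html = page.read_text(encoding="utf-8")
    page.write_text(OLD_LINK.sub(NEW_LINK, html), encoding="utf-8")
```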
Posted in Articles
Continuing from Speed up Your Site Part I:
6. Minimize Hits to the Database
Why is caching so effective? It reduces requests both to the server for processing and to the database for data. Database operations are very expensive in terms of resources, so you should review your code to minimize hits to the database. Consider the following:
- Minimize the number of connections opened to the database during a visit. Once a database connection is opened, try to perform as many database operations as possible and then close the connection. Do not open and close connections several times unless this is genuinely necessary.
- Open database connections as late as possible and close them as early as possible – i.e. don’t do additional processing that isn’t necessary whilst the connection is open; grab the data, close the connection, and then do the additional processing (see the sketch after this list).
- Review your SQL code to ensure it is efficient – several SQL statements can often be combined into one operation, and it is not always necessary to execute separate insert and select statements. Ensure you don’t use SELECT * in your queries – always select just the columns you need.
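A minimal sketch of the open-late/close-early pattern, using Python’s built-in sqlite3 module (the database file, table, and columns are made up for the example):

```python
import sqlite3

def load_order_summary(order_id):
    # Do any pre-processing *before* touching the database.
    conn = sqlite3.connect("shop.db")  # placeholder database
    try:
        cur = conn.cursor()
        # Select only the columns you need -- never SELECT *.
        cur.execute(
            "SELECT id, customer_name, total FROM orders WHERE id = ?",
            (order_id,),
        )
        row = cur.fetchone()
    finally:
        conn.close()  # Close early: release the connection before further work.
    # Post-processing happens after the connection is closed.
    return None if row is None else {"id": row[0], "customer": row[1], "total": row[2]}
```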
7. Optimize Images
Maybe it is just my perception, but I always remember this being first on the list of any site optimization checklist, and now it hardly even features in a top twenty. I guess there are more interesting things to talk about, but optimizing images is still a major factor in reducing page loads. Use GIF wherever possible – screenshots and logos are normally a must for GIF. A GIF is limited to a maximum of 256 colors, but you can normally reduce this further without compromising quality and so reduce the file size. JPGs should be examined even more closely as they are normally larger files; you can set quality anywhere between 1 and 100, and normally around 65 is an acceptable compromise between quality and size. Similarly, you should optimize PNG files for size versus quality. Photoshop’s Save for Web & Devices tool is invaluable for this purpose.
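If you prefer to script the same quality trade-off, the Pillow imaging library for Python can do it (a tool choice of mine, not the post’s; filenames are placeholders):

```python
from PIL import Image  # pip install Pillow

img = Image.open("photo.jpg")
# Quality ~65 is usually a good compromise between size and visual quality.
img.save("photo-optimized.jpg", quality=65, optimize=True)
```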
Also, always use the height and width attributes of the <img> tag, as this speeds up loading, but do not scale an image using these attributes – they should match the actual image size.
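For example (the file name and dimensions are illustrative):

```html
<!-- width/height match the real file, so the browser can lay out the page
     before the image arrives and no client-side rescaling is forced -->
<img src="/images/logo.png" width="200" height="80" alt="Site logo">
```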
8. GZip Everything
Gzip is a popular compression format, and about 95% of all web traffic comes from browsers that support Gzip decompression. Therefore it is safe to say that all your HTTP traffic delivered to the user’s browser should be Gzip compressed. The browser advertises support by sending Accept-Encoding: gzip, deflate in its request headers; Apache can then compress the response and return it with a Content-Encoding: gzip header, which instructs the browser to decompress it. Most sites will Gzip HTML pages, but CSS and JS files should also be Gzip compressed.
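With Apache 2.x this is typically a one-line job, assuming the mod_deflate module is enabled:

```apache
# Compress text responses on the fly; browsers that sent Accept-Encoding: gzip
# receive them with Content-Encoding: gzip
AddOutputFilterByType DEFLATE text/html text/css application/javascript
```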
9. Reduce the Number of Http Requests
HTTP requests are expensive as they require round trips to the web server. Consider the following to minimize the number of requests:
- Combine files – don’t have several .css or .js files unless strictly necessary; just combine them into a single larger file.
- Combine images – if you have several images for your site’s design, consider combining them into a single image file and then selecting a segment using the CSS background-position property (see the sketch after this list). The CSS Sprites article on A List Apart is a good tutorial.
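A minimal sprite sketch (the file name, icon names, and offsets are made up):

```css
/* All icons live in one file (sprite.png); each class shows a 16x16 slice,
   so the page makes one image request instead of three */
.icon {
  width: 16px;
  height: 16px;
  background-image: url("sprite.png");
  background-repeat: no-repeat;
}
.icon-home  { background-position: 0 0; }
.icon-mail  { background-position: -16px 0; }
.icon-print { background-position: -32px 0; }
```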
10. Examine Your Final Page
This isn’t really a separate best practice in itself, but it is probably the most effective way to find page bloat. Too often the optimization is done by reviewing all the server-side files, but when a page is generated there are often new items added to the header which weren’t noticed or were added by a script that was installed. Starting the optimization at the final generated page is the best technique. This will also identify HTML bloat, such as empty tags (empty <span> and <div> tags are infamous for populating pages).
Great video from James Hamilton of AWS explaining the economics of data centers and cloud computing. Ever wonder where the cost is consumed, or why cloud computing is more efficient? Did you know that power is only 13% of the cost of running a data center, but it is the key reason that Amazon decided to lease out its services? This video explains it all!
Check it out here
Posted in News