As global online spending grows, so does the recognition that people spend more when services and products speak to them in their native language. We can all agree that nothing is as off-putting as a poor translation: it undermines the credibility of a site or app. If that is true for us English speakers, it is just as true for speakers of other languages. So, are you truly global? Are you aiming to reach non-English speakers? If so, how?
What localisation involves has been analysed and examined closely over the last few years, and, unsurprisingly, the need for it has grown across sectors and within development. See our earlier blogs for more on these aspects; Locaria, a dedicated localisation company, explains what it does at www.locaria.com.
What's all the fuss about, then? We've already covered some of the basics, such as the fact that localisation applies not just to words and text but to visuals and colours. Experience now shows that apposite keyword use in different countries can also affect a site's performance. It's not just about translating the words, but about conveying the gist in terms of the country's culture at a particular time. This is why you'll see some localisation companies linking themselves to SEO for different cultures. The games industry has woken up to the need for localisation to help sell its offerings in a particular country, in a particular market and through a particular media channel; all of these affect reception and spend. It has also realised that a game itself needs clear instructions in the native language to aid the user experience. Yes, localisation costs a lot, but you have to look at the return to understand the balance. We are talking global products and services here, remember.
It may come as a bit of a shock, but experience shows that you'll save time and money if you plan for localisation at the development stage of any application that is to be published internationally. You may well have to fight the fight with web developers and graphic designers, because adding in globalisation can affect the design quite radically. For example, browsers can be poor at handling non-Roman character sets, especially ones that read right-to-left, and text flow on the page can look significantly different when you move to what are, for us, complex character sets (ones that need double-byte encoding, such as Chinese). See the 'Create an adaptable interface design' section on the Commercial Translation Centre's site: they strongly recommend a fluid layout rather than a fixed-width design. With adaptive layouts already coping with mobile phone and tablet displays, can we consider anything to be 'standard' any more?
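To make the planning point concrete, here is a minimal TypeScript sketch of the sort of groundwork that is cheap at the development stage and painful to retrofit: UI strings kept out of the markup and text direction set per locale rather than hard-coded. The string tables, the data-i18n attribute convention and the locale codes are our own invented examples, not taken from any of the sites mentioned:

// Minimal sketch: externalised UI strings plus text-direction metadata per locale.
type Locale = { dir: "ltr" | "rtl"; strings: Record<string, string> };

const locales: Record<string, Locale> = {
  en: { dir: "ltr", strings: { welcome: "Welcome" } },
  ar: { dir: "rtl", strings: { welcome: "أهلاً وسهلاً" } },
};

function applyLocale(code: string): void {
  const locale = locales[code] ?? locales["en"];
  // Setting dir on the root element lets a fluid CSS layout mirror itself
  // for right-to-left scripts instead of baking in left-to-right assumptions.
  document.documentElement.dir = locale.dir;
  document.documentElement.lang = code;
  // Any element tagged data-i18n="key" gets its text swapped from the table.
  document.querySelectorAll<HTMLElement>("[data-i18n]").forEach((el) => {
    el.textContent = locale.strings[el.dataset.i18n ?? ""] ?? el.textContent;
  });
}

applyLocale(navigator.language.split("-")[0]);

The design choice matters more than the detail: because nothing above assumes a particular language or direction, adding a new locale later is a data change, not a redesign.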
There's been an increase in the number of jobs for localisers with iMedia experience, and companies dedicated to offering translation and localisation for iMedia have emerged. As that experience grows, so does the expertise in how to approach localisation professionally. Companies are also advertising jobs in this specialism as they recognise the need for a specialist to communicate with other specialists.
You know a specialism has arrived when specialist tools are offered. Some have emerged for localisation and translation, but we haven't had enough experience of them to comment. Have you? Finally, there are a few specialist training courses for companies that recognise their need for such services, to iron out some of the problems localisers find when they try to localise an application. See ITR (International Translation Resources) and their Best Practice Seminar on Software Localisation.
Friday, 26 July 2013
Monday, 22 July 2013
When web traffic hits eleven
There's the famous Spinal Tap joke about the musician who believes that, because his amplifier goes up to eleven (when the volume knob usually only goes up to ten), it is the most powerful. That iconic joke has become shorthand for 'giving it all the wellie you can' and 'going for the max'. (It even has its own Wikipedia page.)
A web service I manage had its own 'up to eleven' moment recently, when I was called one evening because the system had crashed and a lot of queries seemed to be coming in from one source. It turned out that one of the users of the service had decided to carry out an unscheduled load test and had inadvertently overloaded us.
This got me thinking about how to deal with spikes in web traffic: what causes them and how you can handle them (following on from Elaine's notes on testing last time). In the real (ie non-testing) world, the kind of thing that generates a rush of traffic is a nationwide TV ad promoting a URL. It can cause an almost instantaneous rush of people to their computers, tablets and so on, similar to the legendary surge of electricity when people all over the country put the kettle on at half time in the Cup Final (it's a UK thing, but I'm sure every country has its equivalent).
Of course, even if everyone pushed the button simultaneously, the requests would not hit the servers at the same time. Network latency tends to spread the traffic out over a short period; we can assume a bell-shaped Gaussian distribution. Not everyone acts at exactly the same moment either, which spreads the curve further. In the end, what proves crucial is the combination of peak traffic per second and how quickly you process that traffic. If the application takes time to process each request (because it's consulting a database and making calculations), then several queries will be working their way through the system at once, all slightly out of sync with each other. If you have a bottleneck because you're not processing queries as fast as they're arriving, your response times slow down at first, and eventually the whole thing may grind to a halt.
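As a back-of-envelope illustration (not from the incident itself), here is a TypeScript sketch of that bottleneck: a single queue drained at a fixed service rate, fed by an invented bell-shaped spike of arrivals. Once arrivals exceed capacity the backlog grows, and it keeps the service busy well after the peak has passed:

// One queue, drained at a fixed service rate (requests per second).
// The arrival figures below are invented purely to show the shape of the problem.
function simulate(arrivalsPerSecond: number[], serviceRate: number): void {
  let backlog = 0;
  arrivalsPerSecond.forEach((arrivals, second) => {
    backlog += arrivals;
    const processed = Math.min(backlog, serviceRate);
    backlog -= processed;
    console.log(`t=${second}s arrivals=${arrivals} processed=${processed} backlog=${backlog}`);
  });
}

// A bell-shaped spike peaking above a capacity of 100 requests/second.
simulate([20, 60, 150, 200, 150, 60, 20, 10, 5], 100);

Run it and you'll see the backlog climb to 200 queued requests at the peak and take several seconds to drain after arrivals have fallen away, which is exactly the 'slows down, then grinds to a halt' behaviour described above, in miniature.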
Some of this load is handled by your web server and some by any back-end database. Either or both can be a bottleneck, and diagnosis is especially difficult if both run on the same machine or virtual machine.
The Whitesites blog has some musings on How much traffic can one server handle (Feb 2013).
Using the cloud can help, as long as you know the spikes are coming, since you can spin up more capacity at relatively short notice ... but some notice is required. Doug Heise on the iMedia Connection discusses How to prepare your website for a traffic spike and goes beyond the technical issues, since some of the planning for such events should be driven by marketing. He notes Gartner's prediction that marketing budgets will exceed IT budgets by 2017.
Your hosting company will help (as this post on Atlantica outlines: How To Find A Hosting Solution That Handles Traffic Spikes). I would add that their technical support engineers will probably know a lot more about how the equipment works at a low level, and can use a variety of tools you may not have heard of to suggest how to overcome speed problems.
Eventually it may come down to a cost-benefit analysis, where you decide how likely it is that you'll be hit by spikes and whether you can afford to go offline for a short while if the worst happens. After all, real people don't usually behave like load-testing programs ... do they?
Labels: load testing, security, testing
Friday, 5 July 2013
A foray into testing for iMedia
As platforms mature and reach widespread use, the need for robust applications increases. After all, you don't want Joe Public's tweets turned on you because of an application's poor performance! But as the complexity of an application's functionality increases, so does the risk of breakdown.
This means that testing applications under real-use scenarios becomes more important. However, it is difficult to find testers who combine experience with mastery of the tools of the trade, and to account for the constant up-skilling they need to tackle emerging situations. Do you value your testers? You should. They are your quality assurers. They protect your reputation – if you let them.
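For a flavour of where 'real-use' pressure testing can start, here is a minimal TypeScript sketch using the standard fetch API to fire a batch of concurrent requests and count the successes. The URL and the concurrency figure are invented placeholders, and a professional tester would reach for far more sophisticated tooling than this:

// Fire a batch of concurrent GET requests and report how many succeeded.
async function smokeTest(url: string, concurrent: number): Promise<void> {
  const started = Date.now();
  const results = await Promise.allSettled(
    Array.from({ length: concurrent }, () => fetch(url)),
  );
  const ok = results.filter(
    (r) => r.status === "fulfilled" && r.value.ok,
  ).length;
  console.log(`${ok}/${concurrent} requests succeeded in ${Date.now() - started}ms`);
}

// Placeholder endpoint and load level, purely for illustration.
smokeTest("https://example.com/health", 50);

Even a toy like this, pointed at a staging server, will surface timeouts and error responses long before Joe Public does.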
How do you know you're employing the equivalent of a professional tester? Good question, and Epicentre: testing and support addressed this in epicentre.co.uk/what-defines-a-professional-tester (21st June 2013).
How vulnerable are different client sectors (insurance, law, healthcare, financial services, IT, telecommunications, UK government, media and advertising) to testing nightmares? See the white paper Web Application Vulnerability Statistics 2013 by Jan Tudor at Contextis (June 2013). You might find some surprising information that will affect some of your projects.
There may be a bit of friction between developers and testers, because they have opposite approaches to life and the universe: developers code to get things working, while testers work hard to break the code, ultimately to strengthen the development. It's easy to see the potential for conflict. Ericka Chickowski goes further in Getting the Most from Web Application Testing Results, explaining that lessons learnt by testers often aren't implemented by coders because the two groups barely communicate with one another. Is this a problem for you? Did you even know it might be?
We all want to improve the quality of performance, surely.
Labels: developers, testing