
#1 - Heuristic evaluation - Part 1


Written transcription of the podcast:

Welcome to the Parlons UX Design podcast. I’m Thomas Gaudy, a UX designer specializing in video game accessibility for people with disabilities.

I'm starting a series of audio podcasts whose purpose is to explain the different methodologies and theories in the field of UX Design that I myself know (because obviously I don't know everything).

I work at Ludociels pour tous, a non-profit organization whose goal is to design video games with a social purpose; we are mainly interested in accessibility. Our current business model is to work with different types of clients, including schools, where I teach. We have to be very careful about how we use our time. Repeating the same theoretical notions from one year to the next takes a lot of time and can be just as frustrating for the students in the two schools where I teach, ITESCIA in Cergy-Pontoise and ISART in Montreal. What I enjoy is seeing how the students take this theory and adapt it to their own projects and concerns.

There's a kind of tug of war here, and I think a podcast can be a good way to save everyone's time. It would give students a theoretical base that is, I hope, a little better organized, available at all times, and free. It could also be useful for people who are not in these schools. And it would free up my time to support the students in their projects and to be more precise on very specific points.

I don't know at what pace these podcasts will be published. The general themes will be theory and methodologies in UX Design.

For this first podcast, I will talk about heuristic evaluation. We will also talk about playtests, but what I mainly want to cover is what seems particularly important to me (again, I have to be careful about what I think is important) for successfully conducting a heuristic evaluation. I give my students a template document that includes several pieces of information, and I propose to go through these pieces of information one by one, to explain and justify each of them. This should make the template a little easier to use.

As a general remark about the process, whatever the type of analysis, the goal is not to produce yet another report; the goal is the game project you are embarking on.

So in principle, you're going to run tests and analyses. They will delay your development effort, but they will allow you to identify bugs, or worse problems on the players' side, more quickly. If you wait until your game is almost finalized, it will certainly be easier to observe these problems with your players, but it will be too late to take them into account in your project.

It's very important to have different methodologies. In class, I explain that before arriving at a final product, there are several very practical methodologies. In order of strength, there is the alpha version, or even better the beta: a release in beta access, for example, to a community of pre-selected players who can give you remote feedback on the qualities and flaws of your game. That's very good, and it's even more powerful if you pair it with automated data analysis to understand play times, and where your players are failing or taking too long to understand what's going on. That kind of thing is really very, very good. But your game has to be strong enough and satisfying enough that players don't turn away from it when you make that early "release".

For that, you'd be wise to ensure the quality of the project before releasing it at a stage that is, by definition, nearly finished. Before the beta, there are the playtests. You can run them directly in your studio or meet players at fairs and other events. In my view, it's the most efficient method for getting direct qualitative feedback (qualitative, not quantitative): seeing what happens in terms of expressions, behavior, and verbalizations is very comprehensive. But it takes a lot of time, and there are a lot of things to keep in mind as well.

Even before that, there's another approach that is more time-efficient but less effective: a little less precise, much less accurate, but generally convenient for getting the project design on track before you reach the playtest stage. Even if you do playtesting, you should also be able to apply this approach in the early stages of designing your game, before it is even made interactive. This first methodological approach is called heuristic evaluation, and it's what we're going to talk about in this first podcast.

Heuristic evaluation, what is it? It's about using expert recommendations that have been validated and proven by scientific approaches. What is a scientific approach? It is a methodology that allows others to repeat a hypothesis, an experimentation protocol, the collection of objective data, and the interpretation of results. The methodology is the recipe that articulates these different notions to arrive at an interpretation that remains open to discussion, because other people can reapply the same recipe and perhaps find different things. It is important to understand that scientific methodology does not prove anything: by design, it is a method of discussion that always remains valid, that always leaves the door open for experts to answer each other and refine the interpretations that can be made.

So heuristics, I repeat, are very general recommendations supported by scientific methodology. This means they are recommendations that always carry some bias and will always need to be improved, because scientific research keeps advancing, becoming more refined and better adapted to different contexts. But they are not recommendations pulled out of a hat by some expert who says, "well, this is how we're going to do it".

Relying on expert opinion alone quickly becomes very problematic, because you will soon find two experts telling you completely contradictory things about your project. You need a methodology to determine which one seems the most reliable, provided you understand it and read a little about the context and the experimental protocol. This takes time, and you have to be careful: when it's too time-consuming, it becomes counter-productive for an industrial production approach. However, it can help you start off on the right foot. So, to facilitate this process and save time, I'm reworking the template document I use for teaching, so that it can perhaps also be useful to others who wish to set up heuristic analyses.

The ultimate goal of this kind of approach is the video game itself. You have to understand that the production process of a video game is a big funnel: at the end, there is only one video game. In this process, you are most likely not going to be alone. Most of the time, a video game is the result of teamwork; even independent developers will generally call on a few collaborators here and there. It's very rare for a video game to be made by a single person. It exists, of course. But as soon as several people are involved, disagreement is inevitable: not all the ideas of the different people involved are going to end up in the final project. So before you get to the end of the funnel, which is the finished video game, the thing just before it is the scheduling tool. It can be Taiga, it can be Jira; there are a lot of them. It's the tool you use to determine who is going to perform which tasks.

The goal of the UX Design ergonomic approach is the video game. Before that, of course, it's a question of having an impact on the task distribution tool, the planning tool, to know which person is going to take care of which modification or which task to improve the game. That's very good, but it's still not enough, because you're probably going to see a lot of problems as you go along, and of course not all of them can be transcribed as tasks in the planning tool. There are priorities to manage, so only certain priorities will make it into the planning tool. And what do we do with the rest? There is a risk that it gets lost somewhere. Keep in mind that planning management tools deal directly with tasks to be done.

The problem with the to-do-task concept is that we lose visibility on the nature of the problem. If you ran a test or an evaluation methodology, you observed problems and made interpretations of them, and from those interpretations you derived tasks for different people. That whole stream of reasoning goes out the window if you go directly into the scheduling tool. So before that, we go a little higher up in our funnel: it's important to have a test report document that explains this chain of reasoning leading to the recommendations: so-and-so is going to do this task, so-and-so is going to do that task.

This is the document that I propose to help you produce a test and analysis report that can serve you more efficiently. To begin with, it is important to have a good nomenclature. You're going to have several versions of tests and several test reports, and poorly chosen names can waste time as you pore over your document archives without knowing which document corresponds to what. Each team has its own naming nomenclature. There is one that I particularly like. It's not necessarily the best, but it consists of systematically naming documents with the year, a dash, the month, a dash, and the day: for this podcast, that would be 2020-04-17 (and over time I would say, "Ah! The podcast is old. It will have to be redone").

This way, you can quickly tell the old test reports from the most recent ones. But it's not enough to write the year-month-day: you also need the name of the project being tested, because you can be on teams with several projects, and the version of the project you are testing, which is very important. You can easily imagine testing two different versions of a game on the same day, and you have to be able to distinguish them efficiently. This is just at the level of the nomenclature used to name your document, but as soon as you have several documents, it is important to respect it. That way, your ability to produce new ideas or new tasks will not be hindered.
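
As an illustration only (my own sketch, not something from the podcast; the project name, version, and file extension are hypothetical), a small helper that applies this year-month-day nomenclature could look like this:

```python
from datetime import date

def report_filename(project: str, version: str, report_date: date, ext: str = "md") -> str:
    """Build a test-report file name: YYYY-MM-DD_project_version.ext."""
    # ISO date order (year first) makes an alphabetical sort chronological.
    return f"{report_date.isoformat()}_{project}_{version}.{ext}"

# Hypothetical example: two different versions tested on the same day
print(report_filename("MyGame", "v0.3.1", date(2020, 4, 17)))
# -> 2020-04-17_MyGame_v0.3.1.md
print(report_filename("MyGame", "v0.3.2", date(2020, 4, 17)))
# -> 2020-04-17_MyGame_v0.3.2.md
```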

Once you have a well-named document, it is obviously important to have clear content. Clear content means short sentences. Many people, myself included at the beginning of my professional activity, try to write very convoluted sentences. The rule is simple: if a sentence runs longer than one line, it will be a problem for the people who read it. They won't like it. This is not a literary exercise; it's an exercise in conciseness and precision at the same time. If you need to go into detail, break your ideas down into several sentences. This is very important, because if your report is full of good ideas but not very readable, it will not have the impact you want.

Otherwise, the reading your collaborators have to do will be tedious and unpleasant for them. Short sentences make the exercise much more pleasant and effective. Short sentences are good, but it is also important to be precise. By precise, I mean avoiding the "etc." that appears in many of the student assignments I've seen. For example: "the graphics are ugly, characters are ugly, enemies are ugly, etc.". That's too vague. You have to define what is ugly more precisely. Is there a particular enemy you're concerned about? In what context? Is it in a particular animation? Does it refer to a scene? We'll see a little later how to detail these kinds of things more precisely. We must avoid generalities at all costs, because when it's too vague, it doesn't mean anything anymore.

If a problem is too general, I recommend breaking it down into several sub-problems. Another thing: don't mix information from several problems at once. Each problem should be well separated from the others so that the logical flow of your thinking can be followed. For example, don't tackle several manipulation problems together; break them down one after the other. Suppose you have a game played with keyboard and mouse, and your reasoning concerns, first, the over-use of too many keyboard keys; second, triple-key combinations that have to be performed; and third, the playability of the keyboard alone or the mouse alone. Those are three separate problems to detail. This makes the problems easier to identify and simplifies the tasks that could result from them.

So much for the general remarks; now for the document itself. You will of course indicate your first and last name. This is important in a professional setting, because there can easily be several UX designers or several ergonomists, and it lets everyone know where a document came from. It seems obvious, but when a document isn't signed, things get complicated. It also matters for reasons of trust: you may have colleagues whose content is reliably good, and others for whom it doesn't always go as expected. So don't forget this obvious little detail. Don't forget to put the date inside the document as well, even if it repeats the naming nomenclature of your file. You should also include the name of the tested project, the version of the tested project, and an explanation of the problem-importance classification you intend to use.
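
As a minimal sketch (again my own illustration of the header fields just listed, not a format prescribed in the podcast; all values are hypothetical), the top of the report could be modeled like this:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ReportHeader:
    """Header fields for a test/analysis report, as listed above."""
    author: str          # first and last name of the evaluator
    report_date: date    # repeated inside the document, not only in the file name
    project: str         # name of the tested project
    version: str         # exact version of the tested build
    severity_scale: str  # the problem-importance classification you intend to use

# Hypothetical example
header = ReportHeader(
    author="Jane Doe",
    report_date=date(2020, 4, 17),
    project="MyGame",
    version="v0.3.1",
    severity_scale="A/B/C/D, from most critical to most optional",
)
```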

It may be a bit surprising for students at first, but the idea is that not all problems are equally serious: for every 100 problems you manage to identify, you can imagine that around 70 are really important, and you will rarely have the time and budget to fix them all. What can a minor problem look like? It can be something not very important: the nice little animation that's missing but not essential, or a small graphic effect you'd like when the mouse passes over a button, or when you press or release it. Small things that come as standard in some interfaces but that won't greatly hinder the usability of yours.

Then there are more serious problems. For example, the game doesn't have a bug, it works well, but there's a little something missing that prevents the player from having fun. It may be a control that is uncomfortable to use, or an interaction that is a bit "heavy". It could be something displayed on screen that is hard to read because it's too small, oddly worded, or full of mistakes. None of these are bugs that will crash the game, but it is important to fix them. There are different possible severities here, but we understand that they are more important and of higher priority than the problems I mentioned before.

The next category, of higher importance still, is obviously the big bugs, the big crashes. But they are not necessarily the only blocking points in a video game. It can be the player who "bugs": who arrives in a game situation and doesn't understand anything anymore. You can say that functionally the game works, that there are no bugs. That's fine, but if all the players are stuck, wondering "why can't I get out of this room?" or "what do I have to do to continue my quest?", then some critical information or element of understanding is missing. You may decide to leave it there and see if players figure it out on their own, but that can be risky. So, here again, a classification: each company has its own. Standards exist, but there are many of them. I like to use the A, B, C, D classification.

It typically includes:

  • A: Big blocking point or crash. To be solved with the highest priority.

  • B: Not a big crash or a big blocking point, but something that greatly diminishes the player's satisfaction. Typically: awkward handling that exasperates the player, readability problems (everything works, but reading is really tedious), or poor feedback on some information. The game functions, but it causes a lot of frustration. You have to solve these problems for the overall quality of your game; it's quite essential to consider them.

  • C: Important but not critical problems, and there are plenty of them. Typically, C covers everything you could put into polishing the game to make it better, without being strictly essential. Be careful with polish: there can be a lot of it. In many projects it can easily take 50% of the development time, because it means building many small mechanisms of all kinds. This mass of polish is invisible at the beginning, when you're starting a game design, so it's very important to plan time in your schedule to deal with as many of these problems as possible. They go unnoticed at first, but as you run your tests and analyses, lots and lots of them will come up, and you'll feel overwhelmed and behind schedule. Which brings us to the last category of problems.

  • D: Problems that are certainly interesting but that you definitely won't have time to deal with. Typically, you set them aside for a follow-up. If you revisit the game, or make a sequel, you bring those issues back to mind when you start designing the new project, so that it has better quality right away.

I like to use the A, B, C, D classification (from the most critical problems to the most optional).
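
As a sketch (my own illustration of this A/B/C/D scale, not code from the podcast; the example problems are hypothetical), the classification could be encoded so that a list of findings sorts from most critical to most optional:

```python
from enum import IntEnum

class Severity(IntEnum):
    """A/B/C/D scale, from most critical (A) to most optional (D)."""
    A = 1  # blocking point or crash: solve with the highest priority
    B = 2  # no crash, but greatly diminishes player satisfaction
    C = 3  # important polish, not critical; plan schedule time for it
    D = 4  # interesting, but deferred to a follow-up or a sequel

# Hypothetical findings, sorted so the most critical come first
problems = [
    ("Hover effect missing on the main-menu buttons", Severity.C),
    ("Game crashes when loading the second level", Severity.A),
    ("Players cannot figure out how to leave the first room", Severity.A),
    ("Triple-key combination is exhausting to perform", Severity.B),
]
for description, severity in sorted(problems, key=lambda p: p[1]):
    print(f"[{severity.name}] {description}")
```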

This concludes the first part of this podcast dedicated to heuristic evaluation. In the second part, we will look more specifically at the information you will have to fill in, in your report, for each of the problems you identify. See you soon.

Thank you for listening to this podcast. I invite you to subscribe so that you don't miss the next episodes. If you want to know more about me, you can consult my LinkedIn profile. And if you would like support implementing these concepts and tools in your teams and projects, you can call on my services as a UX Design consultant. Looking forward to it!


Written transcription in French: Guillaume Le Négaret

Correction in French: Ngala Elenga

Translation: Cynthia Lee
