Claire Tills

We need more *Quality* Research

I've said it many times and I will continue to say it: I want people to do more research on topics related to information security. The last thing I posted here was a call for more qualitative research (based on interviews, textual analysis, observation, focus groups, etc.) in this field. Since then, I've come to a realization. It's more important that the research being conducted and spread around the community be...decent research. I've seen a lot of questionable research (to put it diplomatically) getting a lot of coverage and attention, and that really troubles me.


Not all research is created equal. Good research is methodologically solid and is presented to the world accurately. The second part is usually trickier, but many research projects trip over the first hurdle as well, so let's start there. *Quick disclaimer:* this is a high-level discussion of research methodology intended to help you evaluate research as you read about it.


What does “methodologically solid” mean? It basically means research conducted in a way that answers the research questions accurately – the results of the research represent the reality of the phenomenon being studied. A LOT of effort goes into producing methodologically sound research. You’re attempting to avoid bias, determine how much data you need to collect to feel confident that you’re answering the question(s) completely, and recruit or collect a representative sample (more on that later), all while hoping you find something meaningful in the data.


Many research studies, especially ones you read about in the news or on social media, are conducted to prove a point rather than to understand a phenomenon. So, instead of asking open-ended questions like “How do fear appeals modify end user behavioral intentions?” the research asks closed questions designed to confirm a conclusion. This sort of research is often done to sell a product or promote a service, and it is fuel for clickbait (anyone remember that Corona beer “study”?). Methodologically, this type of research uses leading questions and doesn't explore context or extenuating circumstances. It also uses haphazard sampling methods, ensuring it cannot be reliably generalized to the broader population, but that doesn't stop authors, reporters, and people on social media from generalizing.


Now, let’s talk about sampling. This is probably the number one indicator I look for when evaluating a study floating around on social media or in the news. I ask, "Did the researchers survey or interview the right people to answer the questions they want to understand?" Sampling is also the bane of the researcher. Finding enough people who fit the criteria to participate in your project can be tough, and many times researchers fall back on convenience sampling. This is why so many studies are based on college students – researchers are college professors and therefore have access to college students in large numbers.
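To make that pitfall concrete, here's a minimal, hypothetical simulation (the population and the correlation in it are invented purely for illustration, not drawn from any real study). It assumes a population where the trait being measured happens to correlate with how easy a person is to reach, and shows how grabbing the easiest-to-reach people skews the estimate while a random sample of the same size does not:

```python
import random

random.seed(42)

# Hypothetical population of 10,000 people. Each person has a "reachability"
# score and an opinion that (unrealistically cleanly) correlates with it --
# e.g., the people who are easiest to recruit are also more likely to agree
# with the survey question.
population = []
for _ in range(10_000):
    reachability = random.random()
    agrees = random.random() < (0.3 + 0.4 * reachability)
    population.append((reachability, agrees))

true_rate = sum(agrees for _, agrees in population) / len(population)

# Convenience sample: just take the 500 easiest people to reach.
convenience = sorted(population, key=lambda p: p[0], reverse=True)[:500]
convenience_rate = sum(agrees for _, agrees in convenience) / len(convenience)

# Probability (random) sample of the same size.
random_pick = random.sample(population, 500)
random_rate = sum(agrees for _, agrees in random_pick) / len(random_pick)

print(f"True agreement rate:         {true_rate:.1%}")
print(f"Convenience-sample estimate: {convenience_rate:.1%}")  # badly inflated
print(f"Random-sample estimate:      {random_rate:.1%}")       # near the truth
```

Same sample size, same questions; the only difference is who ended up in the sample.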


Using a sample that doesn’t accurately represent the people you want to understand means that you aren’t really learning what you want. If you’re asking about buying behavior, the sample needs to represent people in a reasonable position to buy the given product. In academia, this is referred to as generalizability or external validity, and its absence should be an immediate red flag. Look at how the researchers discuss their findings given the sampling issues. Do they admit the limitations of the sample and contextualize their findings accordingly? Or do they draw conclusions about groups that are not represented in their sample, without qualification?
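One rough sketch of what checking representativeness can look like: if you know the population's composition (say, from a census or industry survey), you can compare your sample's makeup against it with a goodness-of-fit test. The role labels and counts below are invented for illustration:

```python
from scipy.stats import chisquare

# Hypothetical: the target population is 40% "role A", 35% "role B",
# 25% "role C", but our 200-person sample came out 120 / 50 / 30.
sample_counts = [120, 50, 30]
population_props = [0.40, 0.35, 0.25]
n = sum(sample_counts)

# The counts we'd expect if the sample mirrored the population.
expected = [p * n for p in population_props]

stat, p_value = chisquare(f_obs=sample_counts, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.2g}")
# A tiny p-value means the sample's makeup differs meaningfully from the
# population's -- a signal to qualify any generalization of the findings.
```

A mismatch like this doesn't make the findings worthless, but it should show up in how the researchers qualify their conclusions.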


Secondary to sampling the right people is sampling enough people. If I read a source that only surveyed a dozen people, I'm likely going to stop reading. There's no definitive rule for how many participants are enough, but there is a floor. When I do an interview project, I want to speak to at least ten people. Surveys usually call for more participants because of the types of questions you're able to ask in that format. The ideal sample size depends on the target population (the group of people you want to understand) and the types of questions you want to answer. The larger the population and the broader the questions, the more participants you'll need.
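For surveys specifically, a common starting point (one sketch among several approaches, not a universal rule) is the standard margin-of-error calculation with a finite population correction. The defaults below assume 95% confidence and a conservative 50/50 split:

```python
import math

def survey_sample_size(population: int, margin: float = 0.05,
                       z: float = 1.96, p: float = 0.5) -> int:
    """Classic sample-size estimate: z-score for the confidence level,
    expected proportion p (0.5 is the conservative worst case), desired
    margin of error, then a finite population correction for small N."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return math.ceil(n)

for pop in (100, 1_000, 10_000, 1_000_000):
    print(f"population {pop:>9,}: ~{survey_sample_size(pop)} participants")
# population       100: ~80 participants
# population     1,000: ~278 participants
# population    10,000: ~370 participants
# population 1,000,000: ~385 participants
```

Notice how the required number grows with the population but then plateaus around a few hundred – which is exactly why a dozen respondents doesn't clear the floor for almost any survey question.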


Research that breaks these principles is typically not published in an academic journal or presented at an academic conference. It wouldn't make it through peer review (there are exceptions), and academic outlets would require researchers to address limitations. However, that doesn't stop the findings of bad research from spreading. Ideally, findings from poorly done research would just fade into the ether, but often they become part of "public knowledge," at least temporarily. This comes down to how the findings are publicized and written up.


That’s the topic for my next post. I’ll explore the stickiness of bad research and give some examples.




