
Rural and Urban Advertising – BMS Notes

Concept testing is the process of using surveys (and occasionally qualitative methods) to assess consumer acceptance of a new product idea prior to the introduction of a product to the market. It should be distinguished from pre-test markets and test markets, which may be used at a later stage of product development research. Contrary to what is occasionally done, concept testing should not be confused with advertising, brand, or package testing. Concept testing concentrates on the core idea for the product, removing any frills and hyperbole that come with promotion.

It is crucial that the instruments (questionnaires) used to evaluate the product idea are of high quality; otherwise, measurement error could skew the results of the surveys used to collect data. This adds complexity to the design of the testing procedure. Empirical tests shed light on a questionnaire’s quality. Ways to accomplish this include:

carrying out cognitive interviewing

By asking a subset of potential respondents how they understood the questions and used the questionnaire, a researcher can confirm that it is interpreted as intended.

conducting a brief pretest of the survey with a small portion of the intended audience.

Findings can help a researcher identify mistakes such as missing questions, or logical and methodological errors.

estimating the questions’ measurement quality.

For example, test-retest, quasi-simplex, or multitrait-multimethod models can be used for this.

predicting the questions’ measurement quality.

Survey Quality Predictor (SQP) is software that can be used for this.
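
As a simple illustration of the model-based approaches above, test-retest reliability can be estimated as the correlation between the same respondents’ answers across two sittings. A minimal sketch in Python, using hypothetical 1–5 ratings (the data and function name are illustrative, not from any specific study):

```python
def pearson_r(x, y):
    """Pearson correlation between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical ratings from the same respondents at two sittings
time_1 = [4, 3, 5, 2, 4, 3, 5, 1]
time_2 = [4, 3, 4, 2, 5, 3, 5, 2]
reliability = pearson_r(time_1, time_2)
print(round(reliability, 2))  # a value near 1 indicates stable measurement
```

A high test-retest correlation suggests the question measures something stable; a low one signals measurement error of the kind that skews survey results.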

Concept testing belongs to the idea-generation stage of the new product development (NPD) process, and there are various ways to approach this phase. Concepts sometimes arise accidentally as a result of technological advances; at other times concept generation is deliberate, through brainstorming sessions, problem-identification questionnaires, and qualitative research, for example. While qualitative research can shed light on the variety of responses consumers may have, quantitative idea-test surveys are a better tool for predicting whether a new concept is likely to succeed.

Concept-screening surveys may be necessary in the early stages of concept testing, when a wide range of alternative concepts may exist. Concept-screening questionnaires are a rapid way to narrow the field, but because the concepts interact with one another, they offer less depth of information and cannot be compared to normative databases. Monadic concept-testing surveys are necessary to gain more understanding and to decide whether or not to undertake further product development.

Concept-testing questionnaires are often classified as monadic, sequential monadic, comparative, or proto-monadic. The terms mostly describe the way the concepts are presented:

1.) Monadic.

The concept is evaluated in isolation.

2.) Sequential monadic.

Several concepts are evaluated in sequence (often in randomised order).

3.) Comparative.

Concepts are displayed side by side.

4.) Proto-monadic.

Concepts are first presented in sequence and then side by side.

The majority of idea tests should be conducted using monadic testing: biases and interaction effects are minimised, one test’s results can be compared to those of earlier monadic tests, and a normative database can be built. But each method has a distinct purpose, and the choice depends on the goals of the study. Since that choice has many ramifications for how the data are interpreted, it is advisable to leave it to seasoned research professionals.

Copy testing is a specific area of marketing research that assesses the impact of advertisements by looking at consumer behaviour, feedback, and responses. Alternatively referred to as pre-testing, it may cover all media platforms, including print, radio, television, outdoor signage, the internet, and social media.

One particular subset of digital marketing tied to digital advertising is automated copy testing. This entails gathering information from actual consumers and using software to distribute copy versions of digital ads in a live setting. These automated copy tests typically use the Z-test to assess the statistical significance of the findings. The marketer should adopt a new copy variant if it outperforms the baseline in the copy test at the required level of statistical significance.
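
To sketch the statistic such automated tests rely on, a two-proportion z-test can compare the response rate of a new copy variant against the baseline. The figures below are hypothetical, and the function is an illustrative sketch rather than any vendor’s actual implementation:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Z statistic comparing the response rate of variant B
    against baseline A (conversions out of n exposures)."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical data: baseline ad A vs. new copy variant B
z = two_proportion_z_test(conv_a=120, n_a=4000, conv_b=160, n_b=4000)
# For a one-sided test at the 5% level, adopt variant B if z > 1.645
print(round(z, 2), z > 1.645)
```

Here the variant clears the 5% significance threshold, so under this rule the marketer would roll out the new copy.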

Features

A group of 21 top advertising agencies, including McCann Erickson, N. W. Ayer, D’Arcy, Grey, Ogilvy & Mather, J. Walter Thompson, Needham Harper & Steers, and Young & Rubicam, published a public document in 1982 outlining the PACT (Positioning Advertising Copy Testing) principles that make up an effective copy-testing system. According to PACT, a quality copy-testing system needs to fulfil the following requirements:

  • gives measurements that are pertinent to the advertising’s goals.
  • requires agreement about how the results will be used in advance of each specific test.
  • offers multiple measurements, as most single measurements are insufficient to evaluate the effectiveness of an advertisement.
  • is based on a model of how people react to communications, covering how they perceive, understand, and respond to stimuli.
  • allows for consideration of whether the advertising stimulus needs to be shown more than once.
  • acknowledges that the more finished a piece of copy is, the more soundly it can be evaluated, and requires, at minimum, that alternative executions be tested at the same level of finish.
  • offers safeguards against the biasing effects of the exposure context.
  • considers the fundamentals of sample definition.
  • demonstrates reliability and validity.

Measurements used in copy tests

Recall

Burke’s Day-After Recall (DAR), the most popular copy-testing tool of the 1950s and 1960s, was used to gauge how well an advertisement could “break through” to the customer and stick in long-term memory. After Procter and Gamble adopted the metric, it became standard practice in research.

Validation efforts conducted in the 1970s, 80s, and 90s revealed no connection between recall scores and actual sales (Adams & Blair; Blair; Blair & Kuse; Blair & Rabuck; Jones; Jones & Blair; MASB; Mondello; Stewart). For instance, Procter and Gamble examined 100 split-cable tests spanning ten years and discovered no correlation between sales and recall scores (Young, pp. 3–30). Furthermore, Leonard Lodish of the Wharton School examined test-market data in even greater detail but was still unable to find a connection between recall and sales.

A reexamination of the “breakthrough” measure also took place in the 1970s. Consequently, a crucial distinction was drawn between the creative execution’s ability to capture attention and the advertisement’s level of “branding.” As a result, attention and branding became separate metrics.

Persuasion

The research industry began relying on a measure of persuasion as a sales forecast in the 1970s and 1980s, after DAR was shown to be a poor predictor of sales. Researchers such as Horace Schwerin, who observed that “the obvious reality is that a promise can be well remembered but entirely useless to the prospective buyer when the solution the marketer offers is addressed to the wrong need,” played a significant role in this change. As with DAR, the industry standardisation of the ARS Persuasion measure, also called brand preference, came through Procter and Gamble’s endorsement of it. Even so, recall scores were still reported in copy-testing studies, despite the understanding that persuasion was the key metric.

Diagnostic

Optimisation is the primary goal of diagnostic measures. Diagnostic metrics can help advertisers find ways to improve executions.

Non-Verbal

Nonverbal measures were created because it was thought that many of a commercial’s effects, such as its emotional impact, could be hard for respondents to describe or rate verbally. Indeed, many believe that a commercial’s effects may occur below the level of consciousness. “There is something in the exquisite sounds of our favourite music that we cannot explain and it touches us in ways we cannot express,” says researcher Chuck Young.

To evaluate these nonverbal responses physiologically, researchers in the 1970s monitored participants’ brain-wave activity while they watched commercials (Krugman). Others ran eye-tracking, voice-pitch analysis, and galvanic skin response experiments. These attempts were not widely embraced, partly due to technological constraints and partly because what was generally considered scholarly rather than practical research was not cost-effective.

Moment-by-moment methods began to be experimented with in the early 1980s, as the analytical perspective shifted from viewing a commercial as a single unit to be scored in its entirety to viewing it as an organised flow of experience. The most widely used of these was the dial-a-meter response, which asked participants to indicate their judgement of what was on screen at each moment by turning a dial toward one end of a scale or the other.
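
The output of such moment-by-moment methods is typically summarised by averaging respondents’ dial readings at each point in time, yielding a response curve for the commercial. A minimal sketch with made-up traces (the data and variable names are illustrative only):

```python
# Each respondent's trace: one dial reading (0-100) per second of the ad
traces = [
    [50, 55, 62, 70, 68, 60],
    [48, 52, 58, 75, 72, 65],
    [52, 50, 60, 72, 70, 62],
]

# Average the readings second by second to get the commercial's
# moment-by-moment response curve
curve = [sum(second) / len(second) for second in zip(*traces)]
peak_second = curve.index(max(curve))  # moment of strongest reaction
print([round(v, 1) for v in curve])
print(peak_second)
```

The resulting curve shows which moments of the commercial drew the strongest reactions, which is precisely the kind of insight a whole-commercial score cannot provide.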

More recently, research firms have begun to gauge the emotional impact of copy using psychological tests such as the Stroop effect. These tactics exploit the premise that viewers do not know why they react in a particular way (or that they reacted at all) to a product, image, or advertisement, because such reactions happen outside of awareness, through changes in networks of thoughts, ideas, and images.
