Scott Maclean

  • Title: Principal and Director - Data Science
  • Company: Nulink Analytics


Scott has an extensive (35+ years) background in market research and advanced analytics, including experience in fields such as new product/service development and assessment, market segmentation, choice modelling, pricing analysis, market tracking, brand positioning, public policy assessment and evaluation, feasibility research, advertising testing/tracking, market modelling/forecasting and customer satisfaction research.

His background includes positions as Marketing Science & Research Director with Research International in Melbourne, Frankfurt and London, Head of Advanced Analytics with Lewers Research and, for the past 10 years, Principal and Director – Data Science with Nulink Analytics.

He has published and presented a large number of papers on a range of topics and has been a regular speaker at seminars and meetings within the marketing and social research industry.  He has delivered full day workshops for The Research Society on Choice Modelling, on Latent Class Analysis, and on use of the Q software.

Scott’s qualifications/memberships include:

– Bachelor of Science – mathematical and survey statistics, Monash University (B. Sc. (Hons))

– Master of Applied Science (by thesis) – statistical modelling, Univ. of Melbourne (M. App. Sc.)

– Zertifikat Deutsch als Fremdsprache (ZDF), Goethe Institut

– Fellow and QPR Member – The Research Society



Segmentation of markets has been the ‘Holy Grail’ in market research for many decades.
Historically, and conventionally, this has involved the application of metric algorithms such as Cluster Analysis.
These algorithms, however, impose significant restrictions on the type of data that can be analysed: response scales typically need to be regarded as continuous and/or equal-interval, and ‘distances’ between cases need to be regarded as Euclidean, or some variant thereof.
Latent Class Analysis (and Mixed Mode Cluster Analysis) algorithms are now widely available, and much more forgiving in the demands they make of the data, even to the point of natively dealing with missing data. Metrics such as the Bayesian Information Criterion (BIC) provide statistically robust guidance as to the number of segments to retain.
In effect, almost any data type market researchers deal with (now including text data!) can be analysed using these more modern approaches.
The presentation will show examples selected from segmentation using statement batteries combined with socio-economic/demographic variables, ordered categorical questions (e.g. MaxDiff batteries), and conjoint/choice models, all of which yield statistically rigorously derived and sustainable insights well beyond those available from conventional approaches.
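As an illustrative sketch (not drawn from the presentation itself), the BIC-based choice of the number of segments described above can be demonstrated with scikit-learn's GaussianMixture, which here stands in for a latent class model; dedicated LCA tools apply the same BIC logic to categorical survey data:

```python
# Sketch: pick the number of segments by fitting finite mixture models
# with 1..6 components and keeping the one with the lowest BIC.
# The data are simulated: three well-separated "segments" on a 5-item battery.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=m, scale=1.0, size=(100, 5))  # 100 respondents per segment
    for m in (0.0, 3.0, 6.0)
])

# BIC for each candidate number of segments; lower is better.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(1, 7)}
best_k = min(bics, key=bics.get)
print(best_k)  # the BIC penalty guards against over-segmenting
```

In practice the BIC curve is inspected alongside substantive interpretability of the segments, rather than taking the minimum mechanically.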

