Research Foundation
The published science behind the methodology
Autilogix™ applies established findings from cognitive science, behavioral psychology, AI safety research, and consumer protection law. The sources below represent the published academic and regulatory record that informs how the platform identifies cognitive errors, reasoning failures, and deceptive patterns. This is not an exhaustive literature review — it is the working reference library the methodology draws from.
No endorsement implied
The researchers, institutions, and publications listed on this page have no affiliation with Autilogix LLC or Thalamus LLC and have not reviewed, approved, or endorsed any Autilogix™ product, methodology, or analysis output. Citations are to published works in the public record. All interpretations and applications of cited research are solely those of Autilogix LLC.
FTC and Consumer Protection
Federal Trade Commission. (1983). FTC Policy Statement on Deception. Appended to In re Cliffdale Associates, Inc., 103 F.T.C. 110 (1984).
https://www.ftc.gov/public-statements/1983/10/ftc-policy-statement-deception
Primary source for the three-part deception standard applied in Arbitir™ content analysis.
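The 1983 Policy Statement frames deception as a conjunctive three-part test: a representation, omission, or practice likely to mislead; a consumer acting reasonably under the circumstances; and materiality. A minimal sketch of that test as a checklist follows; the class and field names are illustrative only and are not part of any Autilogix™ API.

```python
from dataclasses import dataclass

@dataclass
class DeceptionTest:
    """Illustrative sketch of the FTC three-part deception test (1983).

    Hypothetical names — not an Autilogix(tm) interface.
    """
    likely_to_mislead: bool    # a representation, omission, or practice likely to mislead
    reasonable_consumer: bool  # judged from a consumer acting reasonably in the circumstances
    material: bool             # likely to affect the consumer's conduct or decision

    def met(self) -> bool:
        # All three elements must be satisfied for deception under Section 5.
        return self.likely_to_mislead and self.reasonable_consumer and self.material

claim = DeceptionTest(likely_to_mislead=True, reasonable_consumer=True, material=False)
print(claim.met())  # False: a non-material misstatement fails the test
```

The conjunctive structure is the key point: failing any one prong defeats the finding.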
Federal Trade Commission Act, 15 U.S.C. §45 (Section 5). Prohibition of unfair or deceptive acts or practices.
https://www.ftc.gov/legal-library/browse/statutes/federal-trade-commission-act
The statutory authority cited in all FTC_LIKELY_MET determinations alongside R-001.
Federal Trade Commission. (2023). Guides Concerning the Use of Endorsements and Testimonials in Advertising (Updated). 16 C.F.R. Part 255.
https://www.ftc.gov/legal-library/browse/rules/endorsement-guides-16-cfr-part-255
Updated guidance that AI-generated content must be disclosed — directly applicable to AI output analysis.
Richards, N. M., & King, J. H. (2014). Big data ethics. Wake Forest Law Review, 49, 393–432.
Documents differential treatment based on user data profiles as both an ethical concern and a potentially deceptive practice.
AI Training, Sycophancy, and Policy Layers
Bai, Y., et al. (2022). Constitutional AI: Harmlessness from AI Feedback. Anthropic. arXiv:2212.08073.
https://arxiv.org/abs/2212.08073
Documents the Constitutional AI value encoding process — primary citation when policy layer findings fire on AI-generated content.
Ouyang, L., et al. (2022). Training language models to follow instructions with human feedback. NeurIPS 2022. arXiv:2203.02155.
https://arxiv.org/abs/2203.02155
InstructGPT paper documenting RLHF methodology — primary citation for emergent sycophancy mechanisms across RLHF-trained models.
Perez, E., et al. (2022). Red Teaming Language Models with Language Models. DeepMind. arXiv:2202.03286.
https://arxiv.org/abs/2202.03286
Documents systematic patterns of AI refusal and sycophantic behavior under adversarial testing across large language models.
Sharma, M., et al. (2023). Towards Understanding Sycophancy in Language Models. arXiv:2310.13548.
https://arxiv.org/abs/2310.13548
Dedicated sycophancy research — primary citation when validation bias findings fire at high severity in AI output analysis.
Casper, S., et al. (2023). Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback. arXiv:2307.15217.
https://arxiv.org/abs/2307.15217
Comprehensive documentation of systemic limitations in RLHF training across large language models — including sycophancy, reward hacking, and policy misalignment as structural properties of the training method, not individual model failures.
Wei, J., et al. (2022). Emergent Abilities of Large Language Models. arXiv:2206.07682.
https://arxiv.org/abs/2206.07682
Documents emergent behaviors in large language models that were not designed but arose from scale — supports competence deflection analysis.
Cognitive Science — Human Reasoning
Stanovich, K. E. (2011). Rationality and the Reflective Mind. Oxford University Press.
Primary academic source on identity-protective cognition and myside bias — theoretical foundation for the human distortion taxonomy.
Stanovich, K. E., West, R. F., & Toplak, M. E. (2013). Myside bias, rational thinking, and intelligence. Current Directions in Psychological Science, 22(4), 259–264.
Peer-reviewed documentation of myside bias as distinct from general intelligence.
Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
System 1 / System 2 framework — foundational source for confirmation bias and availability heuristic findings.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291.
https://doi.org/10.2307/1914185
Foundational research on how humans evaluate risk and make decisions under uncertainty.
Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under uncertainty: Heuristics and biases. Cambridge University Press.
Canonical reference for heuristic-driven reasoning failures including anchoring and representativeness.
Mercier, H., & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34(2), 57–74.
Argumentative theory of reasoning — documents myside construction as an evolved feature of human cognition, not individual pathology.
Sharot, T., Korn, C. W., & Dolan, R. J. (2011). How unrealistic optimism is maintained in the face of reality. Nature Neuroscience, 14(11), 1475–1479.
https://doi.org/10.1038/nn.2949
Asymmetric belief updating research — cited when cognitive dissonance management findings fire.
Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220.
https://doi.org/10.1037/1089-2680.2.2.175
Comprehensive documentation of confirmation bias mechanisms across contexts.
Kahan, D. M. (2013). Ideology, motivated reasoning, and cognitive reflection. Judgment and Decision Making, 8(4), 407–424.
Documents identity-protective cognition — how strongly held beliefs override analytical reasoning.
Kahan, D. M. (2017). Misconceptions, misinformation, and the logic of identity-protective cognition. Cultural Cognition Project Working Paper Series No. 164. Yale Law School.
https://doi.org/10.2139/ssrn.2973067
Extends identity-protective cognition to misinformation resistance — cited in motivated reasoning findings.
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134.
https://doi.org/10.1037/0022-3514.77.6.1121
Dunning-Kruger effect — cited in competence calibration and overconfidence findings.
Morewedge, C. K., et al. (2015). Debiasing decisions: Improved decision making with a single training intervention. Policy Insights from the Behavioral and Brain Sciences, 2(1), 129–140.
https://doi.org/10.1177/2372732215600886
Evidence that debiasing training produces measurable improvement — supports the platform's training methodology.
Sellier, A. L., Scopelliti, I., & Morewedge, C. K. (2019). Debiasing training improves decision making in the field. Psychological Science, 30(9), 1371–1379.
https://doi.org/10.1177/0956797619861429
Field evidence for debiasing effectiveness — direct empirical support for cognitive training producing real-world improvement.
Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19(4), 25–42.
https://doi.org/10.1257/089533005775196732
Cognitive Reflection Test research — cited in analytical vs. intuitive reasoning mode classification.
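Frederick's flagship CRT item shows the analytical vs. intuitive split concretely: a bat and a ball cost $1.10 in total, and the bat costs $1.00 more than the ball. Intuition answers $0.10 for the ball, but that would make the total $1.20. A quick check of the algebra:

```python
# Canonical CRT item (Frederick, 2005):
#   ball + bat = 1.10  and  bat = ball + 1.00
# Substituting: ball + (ball + 1.00) = 1.10  ->  2 * ball = 0.10
ball = (1.10 - 1.00) / 2
bat = ball + 1.00
print(round(ball, 2), round(bat, 2))  # 0.05 1.05
```

The reflective answer ($0.05) requires overriding the fast System 1 response, which is exactly the distinction the reasoning-mode classification relies on.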
Stanovich, K. E. (1993). Dysrationalia: A new specific learning disability. Journal of Learning Disabilities, 26(8), 501–515.
https://doi.org/10.1177/002221949302600803
Establishes rationality as distinct from IQ — foundational to the platform's position that reasoning is a trainable skill independent of general intelligence.
Data Infrastructure and Publication Bias
Ioannidis, J. P. A. (2005). Why most published research findings are false. PLOS Medicine, 2(8), e124.
https://doi.org/10.1371/journal.pmed.0020124
Foundational publication bias research — cited when unattributed "studies show" claims lack replication context.
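Ioannidis's core argument rests on the positive predictive value of a claimed finding: PPV = R(1 − β) / (R(1 − β) + α), where R is the pre-study odds that the tested relationship is true, α the type I error rate, and β the type II error rate. A small sketch of the formula (the function name is illustrative):

```python
def ppv(R: float, alpha: float = 0.05, beta: float = 0.20) -> float:
    """Positive predictive value of a claimed finding (Ioannidis, 2005).

    R: pre-study odds the relationship is true; alpha: type I error;
    beta: type II error (power = 1 - beta). Illustrative helper name.
    """
    return R * (1 - beta) / (R * (1 - beta) + alpha)

# Long-shot hypotheses (1:10 pre-study odds) at conventional error rates:
print(round(ppv(0.10), 3))  # 0.615 -> a substantial share of "significant" findings are false
```

Even before bias or multiple testing enter the model, low pre-study odds alone drive PPV well below certainty, which is why unattributed "studies show" claims warrant replication context.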
Mlinarić, A., Horvat, M., & Šupak Smolčić, V. (2017). Dealing with the positive publication bias: Why you should really publish your negative results. Biochemia Medica, 27(3), 030201.
https://doi.org/10.11613/BM.2017.030201
Documents positive publication bias mechanisms — cited in research claim analysis when negative results are structurally absent.
This reference library is maintained as a living document. Entries are never removed — only deprecated if a source is retracted or superseded. Last reviewed: 2026.