Created on 1/19/2026, 23:16
Updated on 3/19/2026, 23:35
Communication Studies (iii): The Architecture of Modern Perception
From the Engineering of Consent to the Algorithmic Rhythms of Mass Filtering

Preface: Communication Studies, part 3.
1) People Live in Their Imagined World
The Industrial Genesis of Mass Communication
Before the end of the 19th century, humanity of course already had various forms of "communication," but that was not yet Communication Studies in the modern sense. Religious preaching, royal edicts, the spread of rumors, and oral transmission had always existed, yet they lacked a key prerequisite: a "mass influence mechanism" that could be operated at scale, reused, and made to produce statistically predictable effects. What the Industrial Revolution changed was not merely the mode of production but the relationship between information and social structure. Urbanization compressed individuals who had been dispersed across acquaintance networks into dense environments of strangers living together long-term. Newspapers, the telegraph, and cheap printing allowed information, for the first time, to reach millions of mutual strangers almost simultaneously. This change did not simply mean "communication got faster"; it introduced an entirely new problem space: who controls the production and distribution of information, who is being influenced, and why a single piece of information can trigger highly consistent emotions and reactions across a group.
The Shift from Context to Variable
Previous forms of communication were highly dependent on specific contexts for their influence. Whether a religious sermon was effective depended on the personal prestige of the clergy, the sanctity of the ritual environment, and the faith background of the listeners; whether a royal edict could be implemented relied on violence, hierarchy, and local power structures rather than persuasion itself; the diffusion paths of rumors were highly unstable, the speed uncontrollable, and the content constantly deformed during the transmission process. These forms were not without influence, but they were difficult to deconstruct into stable variables, and difficult to verify repeatedly across different scenes. You could not systematically answer: "If I use the same information structure to speak again in another city to another group of strangers, will it produce a similar effect?" Under conditions lacking this reproducibility, theorization was almost impossible to establish.
The Emergence of the "Mass" and the Independent Information
The change brought by industrialized society lies in the fact that the city was not just an expansion of population scale, but the first time strangers were continuously connected together at the same time and in the same information space. In rural society, people understood the world primarily through acquaintance relationships; in the city, a vast number of individuals who did not know each other would read the same newspaper, see the same poster, and be enraged or appeased by the same news on the same day. It was precisely under these conditions that an entirely new object began to emerge—the "Mass." It is neither a community nor an organization, but a collection of strangers that can be triggered synchronously at the psychological level.
The true significance of newspapers, the telegraph, and cheap printing was also not just improving communication efficiency, but making information exist independently of specific interpersonal relationships for the first time. A piece of news does not require you to know the journalist, nor does it require you to know other readers, yet it can influence thousands upon thousands of people at the same time. Under these conditions, communication began to present a certain regularity that could be observed and compared: similar information structures, narrative frames, and emotional cues often triggered similar group reactions in different locations. Fear, anger, nationalistic sentiment, and moral panics began to exhibit inter-individual synchronicity. It was precisely within this structural change that communication, for the first time, became an object that must be systematically studied.
The Birth of Communication Studies as Risk Management
The problems that appeared subsequently were not abstract philosophical puzzles, but realistic problems directly associated with governance, commerce, and war. If the masses can be triggered simultaneously, then who is setting the trigger conditions? Why are certain narratives able to diffuse rapidly while others are ignored? Why do the same facts, through different modes of presentation, trigger diametrically opposed social reactions? Communication Studies gradually took shape in the process of responding to these questions. Its original intention was not "to let people understand each other better," but to understand and manage an entirely new social risk.
In this sense, the historical starting point of Communication Studies lies not in "how long communication has existed," but in society for the first time clearly realizing: public opinion can be systematically shaped, and if this power is not understood, it might in turn threaten order itself. From this moment on, communication was no longer just a subordinate phenomenon of culture, but became a force that needs to be researched, modeled, and controlled. Communication Studies was thus born, and from the very beginning, it was closely linked with power and fear. This also explains why the starting point of Communication Studies is not romantic. It did not originate from "humanity finally wanting to understand each other," but originated from the deep anxiety of the state, capital, and ruling blocs toward the out-of-control masses. Strikes, riots, revolutions, and financial panics were repeatedly discovered to be highly correlated with information diffusion. Information was no longer just reflecting reality, but began to participate in manufacturing reality in a structural sense.
Edward Bernays and the Crystallization of Opinion
In this context, the appearance of Edward Bernays was no accident. Bernays was not a simple theorist, but a person who directly entered the sites of national and corporate opinion operations. He once participated in the U.S. government’s opinion mobilization system during World War I, and later, as a public relations consultant, participated in shaping the cognitive environment for external political events in the United States, including creating public opinion legitimacy for specific political actions regarding the Guatemala issue. His significance lay not in "one man overthrew a regime," but in demonstrating how the shaping of opinion became a part of modern power operations. His publication of Crystallizing Public Opinion in 1923 possesses clear significance as a historical watershed. This book was almost the first time public relations, as a profession, provided a systematic theoretical explanation for itself. In it, Bernays explicitly denied that public opinion is generated naturally, but proposed that public opinion needs to be "crystallized." The so-called crystallization is not fabricating facts, but organizing, focusing, and orienting dispersed, vague, and emotional social attitudes. In this book, he still maintained a technician-like restrained posture, describing public relations as professional labor that coordinates social relations, striving to maintain its neutrality and public interest orientation. This was also his mildest work.
Propaganda and the Engineering of Consent
Propaganda, published in 1928, significantly changed the tone. If the previous book was still explaining "what we are doing," this one directly admitted "this is the reality of democratic society." In the book, Bernays openly pointed out that modern democracy cannot exist without propaganda; the true problem lies not in whether there is manipulation, but in who is to manipulate and how to manipulate. He rationalized guidance and manipulation as necessary conditions for the operation of democracy. The danger of this book lies not in its moral stance, but in its high degree of frankness: communication here is explicitly understood as a technology of power, and is no longer an art of communication. The Engineering of Consent, published after the war, further pushed in this direction. At this stage, Bernays' focus was no longer on justifying legitimacy, but on systematic operation. The concept of the "Engineering of Consent" itself clearly indicates that public attitudes are regarded as objects that can be designed, tested, adjusted, and maintained long-term. He analogized the shaping of public opinion to engineering projects, emphasizing process, expert collaboration, and long-term planning. This marked communication’s upgrade from a strategic behavior to an institutional capability, and foreshadowed the comprehensive appearance of later political consultant systems, campaign machines, and corporate brand systems.
Walter Lippmann and the Pseudo-Environment
If Bernays represents execution and operation, then Walter Lippmann represents cold and sharp diagnosis. Lippmann was not a technical communication scholar, but a person standing at the front lines of news and political practice, observing how modern society loses stability within information. He was born in 1889, grew up in the era of rapid American industrialization and highly concentrated media, received systematic philosophical and political training, and long served as a political commentator for mainstream media. This point is crucial: he did not abstractly imagine the "public" in a study, but in real news production, political maneuvering, and war narratives, repeatedly saw how the public was led, mobilized, and even misled by information. World War I possessed decisive significance for him. During the war, the U.S. government mobilized public opinion on a large scale, manufacturing enemy-vs-us narratives, emotional consensus, and moral legitimacy. Lippmann saw with his own eyes how a democratic society that considered itself rational was led by a unified narrative in an extremely short time. This experience did not make him excited, but made him highly vigilant. From this, he realized that the complexity of modern society had already far exceeded the ability of ordinary individuals to understand the world through direct experience.
In Public Opinion, published in 1922, he proposed the "Pseudo-Environment" concept that has been repeatedly cited since. This concept was not accusing the media of lying, nor was it belittling public intelligence, but was pointing out a structural fact: reality itself has already become too large to be directly experienced by individuals. Politics, economy, war, finance, and international relations mostly happen outside the radius of individual life, but people still must make judgments on these affairs. Under these conditions, people can only live through a "mediated reality." News headlines, pictures, statistical figures, story frames, and moral labels are not reality itself, but the versions of reality after being compressed, trimmed, and formatted. People do not act according to the world itself, but act according to the imagination of the world, and this imagination is continuously supplied by the media and constantly reinforced. Lippmann called this psychological layer the "Pseudo-Environment," emphasizing that it is not equivalent to reality, yet in practice, it replaces the status of reality.
Stereotypes and the Logic of Power
The key point is that this process does not depend on conspiracy. Even if every journalist were absolutely honest, the pseudo-environment would remain unavoidable, because the news form itself can only present fragments of reality, while public affairs demand systematic understanding. This is cognitive compression, not simple distortion. On this basis, Lippmann delivered a cold but important judgment: stereotypes are not moral defects but cognitive tools. In a complex world, people must categorize first in order to orient themselves quickly. The question is not why people use stereotypes, but who supplies them, how they are reinforced, and what power structures they serve. The public, then, is not stupid; it is rationally adapting to a world that cannot be fully understood. The real risk to democracy lies not in the public's lack of rationality, but in institutions pretending that the public is judging the "real world." On this point, the divergence between Bernays and Lippmann becomes clear. Lippmann cared about what this structural rupture meant; Bernays cared about how power should operate given that the rupture is ineradicable. Bernays did not misread Lippmann; he translated the diagnosis into operational logic. In this sense, Bernays is Lippmann's most faithful and most dangerous successor. Lippmann's contribution lies not in offering solutions but in persistently dismantling modern society's illusions: public rationality, media transparency, democracy that runs naturally, an international order grounded in morality. Puncturing these illusions is unsettling, but only then can institutions become more honest.
2) Communication in the Age of TikTok
The Evolution of Communication Models
If we trace Communication Studies from the end of World War I to the present, its evolution is not a linear history of intellectual progress but a process of being forcibly rerouted, again and again, by power needs, technological conditions, and media structures. Every time a new form of communication appears, it forces the field to rewrite what it is actually studying, whom it serves, and what kind of object it treats humans as by default. From the end of WWI through WWII and the early Cold War, Communication Studies in the American context gradually formed a highly "engineered" research orientation. This stage did not begin with a unified theory; pushed by the needs of propaganda practice, opinion management, and attitude research, it gradually focused on a core question: can information be deployed effectively and produce statistically predictable effects? Propaganda analysis, attitude-change research, and effect measurement constituted the period's centers of gravity. The masses were typically assumed to be objects that could be stimulated, influenced, and regulated; the media were understood as relatively neutral transmission pipes; and communication was decomposed into separable, manipulable input and output stages. During WWII and the early Cold War this understanding fit the needs of state propaganda, psychological warfare, and domestic governance, and was continuously reinforced as a result.
Harold Lasswell and the Focus on Effect
Within this orientation, Harold Lasswell's contribution lay not in a specific theoretical model but in setting the basic problem framework for communication research. The analytic formula he proposed in 1948, "Who says what, in which channel, to whom, with what effect," explicitly decomposed communication into a process whose endpoint is "effect." In this framework, communication is first a relation of action rather than a process of interaction; it asks not whether meaning is shared, but whether influence occurred and whether it worked. The audience appears only as the party acted upon, and it matters only insofar as a measurable change ultimately occurs. Understanding is not the core variable; change is. Once communication is modeled this way, dialogue is already marginalized at the research level, because dialogue implies that both sides can be changed by the interaction, while in the effect-oriented framework change is permitted, by default, to flow in one direction only.
The deeper problem lies in the fact that this model implicitly assumes meaning has already been completed before communication begins. Meaning is treated as an object that can be encoded, transmitted, and delivered, rather than a process that needs to be negotiated in interaction, depends on context, and may undergo deviation. As long as the sending end chooses the right symbols, channels, and rhythm, and triggers the expected reaction in the audience, the communication is regarded as a success. Negotiation, ambiguity, and uncertainty are not the research focus, but are variables that need to be compressed and managed. Therefore, the concept of "effect" itself carries strong normativity. Communication is understood as a type of intervention behavior with a predetermined goal, rather than an open public process. Under this logic, as long as the goal is regarded as legitimate, whether the audience truly understands is not important. This line of thought has extremely high operability in political propaganda, advertising, and psychological warfare, but it is not suitable for explaining any social process that requires public discussion, negotiation of meaning, and collective judgment.
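The one-way, effect-terminal shape of this model can be made concrete in a small illustrative sketch. All names and numbers below are mine, not from any real research toolkit; the point is only the structure: the receiver appears as a passive target field, and "effect" is the sole output variable.

```python
from dataclasses import dataclass

# Illustrative sketch of Lasswell's 5W formula as a data structure.
# A one-way record that terminates in a measured "effect"; the
# receiver never appears as a second sender, so dialogue has no
# place in the model.

@dataclass
class LasswellAct:
    who: str          # communicator
    says_what: str    # message content
    channel: str      # medium
    to_whom: str      # audience, modeled purely as a target
    effect: float     # measurable change, the only output variable

def succeeded(act: LasswellAct, threshold: float = 0.1) -> bool:
    """In the effect-oriented framing, 'success' is simply whether a
    measurable change crossed a threshold; understanding is absent."""
    return act.effect >= threshold

campaign = LasswellAct(
    who="war bonds office",
    says_what="buy bonds",
    channel="radio",
    to_whom="urban listeners",
    effect=0.23,  # hypothetical fraction of audience changing behavior
)
print(succeeded(campaign))  # -> True
```

Whether the audience understood anything never enters `succeeded`; that omission is exactly the critique made above.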
The Medium is the Message
Entering the mid-20th century, this assumption encountered systematic challenge for the first time. Election research since the 1940s gradually showed that the masses are not completely passive, and information effects do not always happen directly. Interpersonal networks, opinion leaders, and social structures played significant mediating and buffering roles in the communication process. The research orientation subsequently summarized as "Limited Effects" was not denying the influence of communication, but pointing out that influence is not linear, instantaneous, and omnipotent. Communication Studies appeared more "mild" and more socialized on the surface, but its underlying goal did not undergo a fundamental transformation: it was not giving up on influence, but achieving influence through more indirect and structural modes. Communication shifted from "acting directly upon you" to "acting upon you through the people and environment you trust."
Almost in the same period, the medium itself entered the field's line of sight. Marshall McLuhan's 1960s thesis "the medium is the message" constituted a fundamental challenge to existing communication assumptions. His focus was not specific content but how the form of a medium acts as an environment, reshaping perceptual structure, the experience of time, and modes of social organization. The medium was no longer regarded as a neutral pipe but understood as a force that actively shapes social relations. Before McLuhan, Communication Studies tended to assume by default that the same content transmitted through different media is essentially the same information, differing only in efficiency and reach. McLuhan rejected this premise. Different media are not "different packaging for the same information"; they introduce entirely different perceptual structures. What changes society most deeply, durably, and irreversibly is not what you say through a medium, but how the medium itself changes the way people understand the world and relate to one another.
In this sense, "The Medium is the Message" is not denying the importance of content, but pointing out: the most critical influence brought by the medium happens before the content is understood. By changing the distribution of attention, the sense of rhythm, the mode of participation, and the sensory proportions, the medium in advance shapes how a person can understand and to what extent they can understand. By the time a person begins to judge "whether to believe" certain content, the perceptual structure has often already been set. This perspective makes McLuhan’s judgment appear particularly forward-looking in today's platform environment. When the medium is no longer just a tool, but becomes an environment that humans cannot escape, the power of communication no longer primarily manifests in specific persuasive content, but manifests in the medium's form itself and its ability to rewrite lifestyles.
Platform Power and Rhythmic Control
Entering the internet and platform era, Communication Studies has not completely abandoned existing theories, but its core assumptions have begun to clearly fail. Communication is no longer linear, but is networked, recursive, and highly feedback-driven. Theories such as agenda-setting, framing, and the spiral of silence did not just appear in this period, but they must be re-understood under new conditions: in an environment where platforms control visibility, ranking, and distribution, who possesses communication power is no longer a simple content problem. In the communication environment represented by short-video platforms, the research focus has further shifted. What communication faces is no longer just "how information persuades people," but "how attention is continuously shaped." Content is not actively chosen, but is systematically pushed; the subject of communication becomes blurred, and algorithms assume the role of distribution and screening, yet lack visibility and accountability. The unit of communication has shifted from "information" to "behavioral feedback": dwell time, re-watching, and interaction are more important than understanding itself.
In this structure, the communication goal shifts from attitude change to habit formation. The platform does not need you to believe in a certain stable explanatory framework; it only needs you to form a predictable path of usage. Power no longer primarily achieves itself through discourse consistency, but through rhythmic control—controlling when you watch, how long you watch, and in what emotional state you watch. From this, the main battlefield of communication slides from public reasoning toward attention management, and from meaning production toward time occupation. Different and even conflicting narratives can coexist in the same information stream, as long as they can all grab attention. Ideological consistency is no longer a necessary condition; occupying cognitive bandwidth itself is sufficient to maintain the operation of the system.
When communication operates through rhythm rather than meaning, public discussion then faces a structural dilemma. People are not conquered by a certain viewpoint, but are long-term reshaped by a mode of information intake. Viewpoints can be frequently replaced, but once attention patterns, patience thresholds, and participation rhythms are fixed, the range of what can be understood and seriously thought about will subsequently contract. Ultimately, occupying time will transform into shaping the boundaries of what is thinkable, and communication power subsequently completes the transfer from "persuading people what to believe" to "deciding how people exist within the media environment."
3) The Filtering System of TikTok
Content as a Behavioral Probe
Pushing the analysis further, one can understand TikTok as a real-time behavioral screening system rather than merely a "content platform." The key here is not how the platform evaluates content itself, but how it judges whether content can stably trigger a certain type of user behavior. In this framework, a video functions more like a probe. What it tests is whether, within a specific user structure, a predictable and repeatable behavioral reaction appears. The system cares not about what viewpoint you expressed, but about whether those expressions form a statistically stable behavioral signal.
The Logic of Pressure Testing and Distribution
Judging from externally observable distribution patterns, a new TikTok video usually first enters a small but non-random user sample, more likely composed of users who have previously shown clear reactions to similar content or stimulus types. The focus of this stage is not "recommendation" but something closer to a stress test: from very early behavioral indicators, the system judges whether it is worth investing further attention resources. At this stage the signals that matter are the most basic, cold-start ones: for example, whether the user pauses the instant they swipe to the video, or whether a brief continuation of watching appears. Such signals are not equivalent to complete views or interactions, but they decide whether the video meets the minimum qualification for the next round of testing. If these early indicators fall clearly below baseline, the more complex data that follows will not be examined further.
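A minimal sketch of this cold-start gate, under the assumption (mine, not a documented TikTok mechanism) that only instantaneous hold and brief watch continuation are consulted at first:

```python
# Sketch of the cold-start gate described above. Signal names and
# baseline values are invented for illustration; the structural claim
# is only that the crudest early signals decide whether a video earns
# any further, more expensive evaluation.

def passes_cold_start(early_signals: dict, baseline: dict) -> bool:
    """Return True if the video qualifies for the next testing round."""
    return (
        early_signals["hold_rate"] >= baseline["hold_rate"]
        and early_signals["continuation_rate"] >= baseline["continuation_rate"]
    )

baseline = {"hold_rate": 0.55, "continuation_rate": 0.30}
probe = {"hold_rate": 0.61, "continuation_rate": 0.34}
print(passes_cold_start(probe, baseline))  # -> True
```

If this check fails, nothing downstream (completion rate, comments, shares) is ever consulted, which is the "minimum qualification" logic of the paragraph above.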
Judged by its distribution results, this process is closer to gate-by-gate release than to a composite score. The system does not need to decide whether a video is "good overall"; it continuously evaluates whether the video satisfies the conditions for entering the next stage of testing. Once a key behavioral signal falls short, distribution tends to stop abruptly rather than taper off, which explains why some videos with acceptable early data still lose diffusion suddenly. One direct consequence of this mechanism is that the early sample weighs far more than later performance. A video's fate depends more on the reaction of its initial handful of viewers than on any chance to "prove itself" over time. Once the system judges a stimulus not statistically worth expanding, even genuine latent quality rarely earns new exposure.
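The "gate-by-gate release, hard stop" shape can be made concrete with a small sketch. The stage sizes, thresholds, and the `measure` interface are invented for illustration; the only claim carried over from the text is the structure, that each stage is a gate and a single failed gate halts distribution immediately instead of decaying it.

```python
# Illustrative sketch of layer-by-layer release: distribution is a chain of
# gates, not a composite score. Each stage widens the audience only if its
# own behavioral signal clears a bar; one failed gate halts distribution
# outright. Stage sizes and thresholds are invented.
def staged_release(measure, stages=((500, 0.30), (5_000, 0.35), (50_000, 0.40))):
    """measure(audience_size) -> observed signal rate in that cohort.

    Returns the audience size reached before the first failed gate."""
    reached = 0
    for audience, threshold in stages:
        if measure(audience) < threshold:
            break          # hard stop: later stages are never evaluated
        reached = audience  # gate cleared; widen to this audience
    return reached

# A video whose signal is fine early but collapses at scale: it "suddenly
# loses diffusion" at the third gate instead of fading gradually.
signal = {500: 0.50, 5_000: 0.45, 50_000: 0.10}
print(staged_release(lambda n: signal[n]))  # stops after the 5,000 stage
```

Notice that in this structure "early sample weight" is not a tuning choice but a logical consequence: a video that never clears the first gate has no later performance to speak of.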
Differentiation and Predictable Stimuli
After passing the preliminary test, distribution does not simply scale up; it more likely shifts to differentiated testing. The video is deployed to user groups with different structures and higher thresholds, to see whether the reaction still holds. What the system cares about here is not the volume of reactions but their transferability across user structures. If a stimulus works only among low-threshold audiences but fails quickly among pickier, higher-behavior-cost ones, the system tends to classify it as a locally valid pattern and confine its diffusion accordingly. Such content is not purged, but it is usually held within a narrow distribution band. Conversely, when content at modest scale stably triggers high-cost behaviors, such as comments or shares with a clear stance, the system tends to treat it as a signal with explicit positioning value. That does not make the content "better"; it means the content has formed a tighter binding with a certain class of behavior. For the platform, a stimulus structure that is reusable and predictable often carries more engineering value than content that is merely "acceptable overall."
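The transferability test can be sketched as a classification over cohorts of rising behavior cost. The cohort names, the single rate floor, and the three-way labeling are my illustrative assumptions; the idea carried over is only that what gets measured is whether the reaction holds across structures, not how large it is.

```python
# Hypothetical reading of "differentiated testing": the same stimulus is
# tried on cohorts with rising behavior cost, and what matters is whether
# the reaction transfers. Cohort names and cutoffs are illustrative.
def classify_stimulus(trigger_rates, floor=0.05):
    """trigger_rates: cohort name -> rate of high-cost actions (comment/share).

    'transferable' if the reaction holds in every cohort,
    'local' if it holds only in the easiest cohort, else 'weak'."""
    ordered = ["low_threshold", "mid_threshold", "high_cost"]
    holds = [trigger_rates.get(c, 0.0) >= floor for c in ordered]
    if all(holds):
        return "transferable"  # positioning value: eligible for wider bands
    if holds[0]:
        return "local"         # confined to a narrow distribution band
    return "weak"

print(classify_stimulus({"low_threshold": 0.20,
                         "mid_threshold": 0.01,
                         "high_cost": 0.0}))   # local
print(classify_stimulus({"low_threshold": 0.08,
                         "mid_threshold": 0.07,
                         "high_cost": 0.06}))  # transferable
```

Note that in the first example the raw reaction volume is higher, yet the stimulus is classified as merely local: volume and transferability come apart exactly as the paragraph describes.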
At this level, what the system cares about is not the content's semantics but whether the content stably triggers a particular state change. The platform need not understand the logic of a viewpoint; it only needs to identify which type of user, under which stimulus, moves from a pre-behavioral state into another state. Emotion here is not the goal but an intermediate variable that can be recorded and exploited. Once the mapping from content to state transition to behavioral reaction stabilizes, the video is no longer treated as an object awaiting evaluation; it becomes something closer to a tool with a known use. From then on it is deployed continuously to the population most likely to complete that behavior, and is rarely used again to explore entirely different user structures. This is not an organically formed echo chamber; it is closer to the result of precision deployment.
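The shift from "object being evaluated" to "tool with a known use" is, in engineering terms, an explore-versus-exploit switch, and it can be sketched as such. The class name, the stability criterion (low spread over a recent window), and the segment labels are all assumptions of mine, not a description of any real system.

```python
# Sketch of the content -> state transition -> behavior mapping becoming a
# tool: while per-segment response is uncertain, the system explores new
# cohorts; once the estimate stabilizes, it stops exploring and repeatedly
# deploys to the best-responding segment. Criterion and names are illustrative.
import statistics

class DeploymentPolicy:
    def __init__(self, stability_window=5, max_spread=0.02):
        self.history = {}  # segment -> list of observed response rates
        self.stability_window = stability_window
        self.max_spread = max_spread

    def record(self, segment, response_rate):
        self.history.setdefault(segment, []).append(response_rate)

    def _stable(self, rates):
        recent = rates[-self.stability_window:]
        return (len(recent) == self.stability_window
                and statistics.pstdev(recent) <= self.max_spread)

    def next_target(self):
        """Exploit once any segment's mapping is stable; otherwise explore."""
        stable = {s: r for s, r in self.history.items() if self._stable(r)}
        if stable:
            # Known tool: deploy to the best-responding stable segment.
            return max(stable, key=lambda s: statistics.mean(stable[s]))
        return None  # keep probing unfamiliar user structures

policy = DeploymentPolicy()
for r in [0.11, 0.10, 0.11, 0.10, 0.11]:
    policy.record("segment_a", r)
print(policy.next_target())  # "segment_a"
```

The "precision deployment, not echo chamber" point falls out of the structure: once `next_target` returns a segment, exploration of other structures simply stops being paid for.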
Account Stability and Rhythmic Logic
On a still longer time scale, the system's evaluation gradually shifts from individual pieces of content to the account as a whole. The account is treated as a relatively stable source of stimuli, and each video as a sample from that source. The question the system cares about is no longer "how is this video" but "what kind of behavioral reaction does this account usually produce." Once an account exhibits a statistically stable output pattern, distribution relies more on prediction than on re-testing. Content that conforms to the established pattern starts faster and enjoys more fault tolerance, while content that clearly departs from the known stimulus type may receive fewer testing opportunities because its uncertainty is too high. This is also why account pivots are so costly in practice: it is not that the audience cannot accept the change, but that the system must pay an extra learning cost.
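The account-as-prior idea can be sketched as a rule for sizing the initial test audience. Everything concrete here, the 80% conformity cutoff, the multiplier, the minimum history length, is an invented illustration; only the asymmetry comes from the text: on-pattern uploads inherit a predicted start, off-pattern uploads fall back to a minimal cold start.

```python
# Sketch of the account as a stable stimulus source: new uploads that match
# the account's dominant pattern get a larger, prediction-based initial
# audience; off-pattern uploads get only the minimal test. Numbers invented.
def initial_audience(account_history, new_content_type,
                     base_test=500, trusted_multiplier=10, min_samples=20):
    """account_history: list of content-type labels for past uploads."""
    if len(account_history) < min_samples:
        return base_test  # not enough data: everything is a cold start
    share = account_history.count(new_content_type) / len(account_history)
    if share >= 0.8:
        # Conforms to the learned pattern: distribution runs on prediction.
        return base_test * trusted_multiplier
    # Deviates from the known stimulus type: the system would have to pay a
    # learning cost, so the upload gets only the minimal testing opportunity.
    return base_test

history = ["cooking"] * 20
print(initial_audience(history, "cooking"))         # 5000
print(initial_audience(history, "political_rant"))  # 500
```

The cost of a pivot shows up directly: the off-pattern upload is not penalized on its merits, it simply starts from the same small gate as a brand-new account.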
Under this logic, the platform has no innate preference for so-called "positive energy" or "negative energy"; it prefers content whose reaction curve is highly predictable. Accounts with clear stances, stable structures, and consistent output are more easily trusted by the system, while expression that is vague, wavering, or constantly trying different stimulus types is harder to model and amplify. The reaction curve can be understood as a recordable time sequence: at which second the user pauses, when emotion arises, when action follows. The system does not judge the value orientation of this curve; it judges whether the curve is stable enough to be predicted accurately the next time. Stability, not morality or the viewpoint itself, is the core foundation of trust at the engineering level. Seen this way, TikTok is closer to a screener than a stage. Creators are not "expressing" in one direction; they are continuously undergoing the system's inspection of the stability of their stimuli. Content that keeps diffusing is usually not the product of fully conscious catering; it has, unconsciously, become highly compatible with this screening mechanism.
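The claim that trust tracks predictability rather than valence can be captured in one small measurement. The per-second engagement representation and the variance-based score are my assumptions; the sketch only operationalizes the paragraph's point that what is scored is the stability of the curve, not its emotional sign.

```python
# Minimal reading of the "reaction curve": each play yields a per-second
# engagement series, and engineering-level trust is the predictability of
# that series across plays (low variance at every time step), regardless of
# whether the emotion it encodes is positive or negative. Purely illustrative.
import statistics

def curve_stability(curves):
    """curves: list of equal-length per-second engagement sequences.

    Returns the mean per-second standard deviation across plays
    (lower = more predictable = more 'trusted')."""
    per_second = zip(*curves)  # regroup observations by time step
    return statistics.mean(statistics.pstdev(step) for step in per_second)

consistent = [[0.9, 0.7, 0.8], [0.9, 0.7, 0.8], [0.88, 0.72, 0.8]]
erratic    = [[0.9, 0.1, 0.8], [0.2, 0.9, 0.1], [0.5, 0.4, 0.9]]
print(curve_stability(consistent) < curve_stability(erratic))  # True
```

Nothing in the measure inspects what the content says: a consistently bleak video and a consistently cheerful one score identically, which is exactly the moral indifference the paragraph attributes to the system.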