What an interesting and strange article. The author barely offers a definition of "systems thinking", only names one person to represent it, and then claims to refute the whole discipline based on a single incorrect prediction and the fact that government is bad at software projects. It's not clear what positive suggestions this article offers except to always disregard regulation and build your own thing from scratch, which is ... certainly consistent with the Works In Progress imprint.
The way I learned "systems thinking" explicitly includes the perspectives this article offers to refute it - a system model is useful but only a model, it is better used to understand an existing system than to design a new one, assume the system will react to resist intervention. I've found this definition of systems thinking extremely useful as a way to look reductively at a complex system - e.g. we keep investing in quality but having more outages anyway, maybe something is optimizing for the wrong goal - and intervene to shift behaviour without tearing down the whole thing, something this article dismisses as impossible.
The author and I would agree on Gall's Law. But the author's conclusion to "start with a simple system that works" commits the same hubris that the article, and Gall, warn against - how do you know the "simple" system you design will work, or will be simple? You can't know either of those things just by being clever. You have to see the system working in reality, and you have to see if the simplicity you imagined actually corresponds to how it works in reality. Gall's Law isn't saying "if you start simple it will work", it's saying "if it doesn't work then adding complexity won't fix it".
This article reads a bit like the author has encountered resistance in the past from people who cited "systems thinking" as the reason for their resistance, and so the author wants to discredit that term. Maybe the term means different things to different people, or it's been used in bad faith. But what the article attacks isn't systems thinking as I know it, more like high modernism. The author and systems thinking might get along quite well if they ever actually met.
There is something about the Club of Rome's relationship to systems thinking that is similar to Dijkstra's observation about BASIC and programming.
Articles debunking them are always full of fundamental misunderstandings about the discipline. (The ones supporting them are obviously wrong.) And people focusing on understanding the discipline never actually refer to them in any way.
I didn't feel like he was refuting the whole discipline. Rather, he seems to admire Forrester and the whole discipline. The argument just seems to be, even with great systems thinking, you can't build a complex system from scratch and that existing complex systems are often hard to fix.
Couldn't one interpret "magical systems thinking" as a fallacy that people may commit when applying systems thinking? More broadly, I find some of the comments here rather harsh, also considering that many observations in the article are intuitively true for anyone who's ever been exposed to bureaucracy on the meta-level.
The article seems to think that systems thinking only applies at a certain lower scale. Even bringing up the bullwhip effect, and talking about it in certain kinds of systems is itself systems thinking, just not at the subcomponent level which doesn't show it. Systems thinking is about interactions and context.
> The author barely offers a definition of "systems thinking", only names one person to represent it, and then claims to refute the whole discipline based on a single incorrect prediction and the fact that government is bad at software projects.
All valid criticisms, but somehow it sounds exactly like something a member of inept bureaucracy would say.
Yeah, what they are attempting to do in the span of one short essay is equivalent to trying to discredit an entire field of inquiry. Even if you don't think the field is worth anything, it should be obvious that it will take a lot of research and significant argumentation to accomplish that goal; this essay is lacking in both departments.
Applying the idea of "starting with a simple system that works": Shapez (and now Shapez 2) is like Factorio for abstract geometric shapes and colors.
It's got all the essential elements of Factorio that make it so interesting and compelling, which apply to so many other fields from VLSI design to networking to cloud computing.
But you mine shapes and colors and combine them into progressively more complex patterns!
[0] https://cdn.factorio.com/assets/blog-sync/fff-420-line-art.p...
https://en.wikipedia.org/wiki/Shapez_2
> assume the system will react to resist intervention
Systems don't do that. Only constituents who fear particular consequences do.
Systems also don't care about levels of complexity. Especially since it's insanely hard to actually break systems that are held together only by the "what the fuck is going on, let's look into that" kind of attention. Hours, days, weeks later, things run again. BILLIONS lost. Oh, we wish ...
At the end of the day, the term Systems Thinking is overloaded by all the parts that have been invented by so called economists and "the financial industry", which makes me chuckle every time now that it's 2025 and oil rich countries have been in development for decades, the advertisement industry is factory farming content creators and economists and multi-billionaires want more tikktoccc and instagwam to get into the backs of teen heads.
If you are a SWE, systems architect or anything in that sphere, please, ... act like you care about the people you are building for ... take some time off if you can and take care of what must be taken care of, ... it's just systems, after all.
> Systems don't do that. Only constituents who fear particular consequences do.
For example, the human body is pretty decent at maintaining a fixed internal temperature.
Cities supposedly maintain a fairly stable transit time even as transit infrastructure improves.
These are part of a system. Ignoring these components gives you an incomplete model.
(All models are incomplete, by definition, but ignoring constituents that have a major influence greatly reduces the effectiveness of your model.)
This insight - that modeling human systems is hard because humans also respond to models of their world and then change it - is not all that new, it's called reflexivity [1] and has been around for about the same time as systems thinking.
Also a major contributing factor to second (and third) order effects. Humans, groups, even societies all respond to changes - and if you cannot anticipate what those changes might be, then you are out of luck.
(You will never get them all right. You will never even be able to enumerate them all. But you have to be able to predict the order of magnitude of a few of them.)
[1] https://en.wikipedia.org/wiki/Reflexivity_(social_theory)
This is actually a critique of massive bureaucratic systems, not systems thinking as a practice. Gall's work is presented as an argument against systems thinking, while it's a contribution to the field. Popular books on systems thinking all acknowledge the limitations, pitfalls, and strategies for putting theory into practice. That large bureaucracies often fail at this is, in my view, an unrelated subject.
I used to suffer from analysis paralysis when designing even basic things, be it software engineering or my next week's schedule. After all, paraphrasing, "Everyone has a plan until they get punched in the face".
I found it much better to take the first step and progress from there, even when the full solution is not known. Maybe it's a testament to the limits of my own context window. Having said that, I'm not advocating for abandoning architecture or engineering principles. I like the idea of "Growing software" [0]. It's perhaps a more holistic metaphor.
In terms of short-circuiting large bureaucracies, I found "Fighter Mafia" [1] to be an interesting example of this. A group of military officials/contractors managed to influence aircraft design, somewhat outside of the "official" channels. The outcome was better than if it had gone through normal channels.
[0]: http://www.growing-object-oriented-software.com
[1]: https://en.wikipedia.org/wiki/Fighter_Mafia
This article does not begin to cover systems thinking. Cybernetics and metacybernetics are noticeably missing. Paul Cilliers' theory of complexity - unmentioned. Nothing about Stafford Beer and the viable system model. So on and so forth.
The things the author complains about seem to be "parts of systems thinking they aren't aware of". The field is still developing.
"Metacybernetics" is a concept with a small handful of Google hits, some of which appear to be obscure research papers and some appear to be metaphysical crackpottery on blogs.
I think it's worth considering that the theories you're familiar with are incredibly niche, have never gained any foothold in mainstream discussions of system dynamics, and it's not wrong for people not to be aware of them (or to choose not to mention them) in a post addressed at general audiences.
Further, you just missed the opportunity to explain these concepts to a broader HN audience and maybe make sure that the next time someone writes about it, they are aware of this work.
You missed the opportunity to ask a simple question - what is metacybernetics? - and decided everything on that list was just as niche.
Cybernetics was the birthing place of neural networks. Hardly niche.
I don't think commenters should be expected to provide full overviews of topics just to inform others. Parent gave plenty of pointers beyond metacybernetics, all of which are certainly discoverable. If you are curious, read about it. It's not the responsibility of random strangers to educate you.
It seems odd to me that someone would write such a polished and comprehensive article and yet completely misunderstand the definition of the central topic.
That happens in system dynamics a lot, actually - there are many independently developed theories in many different disciplines that do not intertwine historically at all. I have met multiple people who work with systems mathematically on a professional level who had no idea about these other things.
I've seen this too. In particular there seems to be a huge dividing line between systems research stemming from the physical-mathematical heritage of formal dynamical systems, and the other line mostly stemming from everything Wiener did with cybernetics (and some others who were contemporaneous with Wiener). Both sides can be profitably informed by the other in various ways.
The article is right to be critical of "systems thinking", but only because people who advocate "systems thinking" usually don't have a concrete definition of "system" or any unique method of thinking.
The complicated systems that are alluded to here are usually best modeled as optimizers or control systems. Both have clear definitions and vast mathematical corpora.
In a house with heating, it's difficult to cool the whole house or even a single room by leaving the freezer door open. Why? Because the system is programmed to be a certain temperature and has a mechanism continuously driving it to that temperature. It's difficult to knock over one of the Boston Dynamics robots for the same reason.
If the government declares that rents cannot be higher than a certain price per square foot, then mysteriously only the renters with impeccable financials and renting history will be able to get houses. And some houses will stop being available for rent. Why? Because the market is optimizing for value creation. Honest, considerate renters devalue the property less during their stay, and some properties are worth more than the maximum rental price when used for another purpose. If you limit the price, agents will fall back to other mechanisms to determine the most valuable course of action. In this example that is minimizing missed payments, evictions, and property damage.
Unless you affect the controller or optimizer hidden in each system, you can't manipulate the system effectively. Usually you aren't able to do this, and so the system is difficult to control. It's easier to rip out a thermostat than to disable the desire of millions of humans to create value. If you can't model the system in a rigorous way, and then use math to predict and explain it, then you won't be able to manipulate it. Saying that you are using "systems thinking" won't change that.
I think cooling with an open freezer is impossible in general? Or is that your point and I don't understand the argument?
The point is that just dumping an arbitrary amount of cold into the house is unlikely to change the temperature because the thermostat has access to more heat, and has a different goal.
That example was meant to illustrate why simple one-off actions have a diminished or imperceptible effect on the system.
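To make the thermostat point concrete, here is a minimal sketch of a proportional controller; all constants are toy values I chose, not anything from the thread. A temporary "freezer door open" disturbance barely registers, because the loop keeps supplying whatever heat the error calls for:

```python
def simulate(freezer_load=0.0, steps=120):
    temp, setpoint, outside = 20.0, 20.0, 5.0
    for t in range(steps):
        heater = max(0.0, 0.5 * (setpoint - temp))    # thermostat reacts to the error
        cold = freezer_load if 20 <= t < 40 else 0.0  # door open for 20 steps, then closed
        temp += heater - 0.02 * (temp - outside) - cold
    return round(temp, 2)

print(simulate(0.0))  # ~19.42: settles just under the setpoint (proportional droop)
print(simulate(0.3))  # ~19.42 again: the loop absorbed the one-off disturbance
```

The one-off action changes an input, but the controller's goal and authority stay intact, so the output barely moves.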
I'm glad that the author mentions Systemantics because their line of thinking seems heavily influenced by that book. I noted all the main ideas here: https://biodigitaljazz.net/systemantics.html
The citation to the beer game is a pretty fun one. About 15 years ago, John Sterman (a Forrester disciple) held a beer game "world championship" at a system dynamics conference, and my mother and I brought what we think is the optimal strategy and completely dominated the competition. Ironically, if you apply "systems thinking" in the right way, the beer game is a relatively simple thing to play extremely close to optimally. You can recognize that only one player can make choices that matter for the final outcome of the game, and then eliminate most of the claimed dynamics. The issues with systems thinking mostly show up with people being dumb panicky apes and with the pilot/modeler not understanding the system. The math works if you let the math work for you.
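The winning tournament strategy isn't spelled out above, so this sketch doesn't claim to reproduce it; it only illustrates the bullwhip dynamic itself, with invented numbers. A stage that panic-corrects its whole inventory gap each step amplifies a one-step demand spike, while a stage that simply passes demand through does not:

```python
def run(panic):
    demand = [4] * 4 + [8] + [4] * 45       # a single one-step spike in consumer demand
    target = 12                             # each stage's desired inventory level
    peaks, orders_in = [], demand
    for stage in range(4):                  # retailer -> wholesaler -> distributor -> factory
        inv, orders_out = target, []
        for d in orders_in:
            inv -= d                        # ship what's ordered (negative = backlog)
            gap = target - inv
            order = d + gap if panic else d # panicky rule chases the whole gap at once
            order = max(order, 0)
            inv += order                    # toy simplification: resupply arrives instantly
            orders_out.append(order)
        peaks.append(max(orders_out))
        orders_in = orders_out
    return peaks

print(run(panic=False))  # [8, 8, 8, 8]: the spike travels upstream unamplified
print(run(panic=True))   # [12, 20, 32, 64]: peak orders grow stage by stage, the bullwhip
```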
> largely outside the typical congressional appropriation oversight channels
I've seen it happen more than a few times that when software needs to get made quickly, a crack team is assembled and Agile ceremonies and other bureaucratic decision processes are bypassed.
Are there general principles for when process is helpful and when it's not?
Process is useful for raising the quality floor of deliveries, for making former unknowns into knowns, and for preventing misaligned behavior when culture alone becomes insufficient.
If you have need for speed, a team that knows the space, and crucially a leader who can be trusted to depart from the usual process when that tradeoff better meets business needs, it can work really well. But it also comes with increased risk.
I think it's just that complex systems need maintenance, and if the maintenance isn't done (and it never is), the system eventually rusts to the point that it takes far more effort to go through it than to start fresh.
General principle 1: to make a meeting matter, make a decision. (A meeting at its most basic is kinda like a locking primitive, gets independent threads to synchronize for a short time. Think through why you need that synchrony.)
General principle 2: create focus on the critical path. (If each ticket you work on is slightly different from other tickets and no cookie-cutter solutions exist, then there is some chain of slow, annoying, winding steps, and the rest of the dependency graph doesn't really matter, just these big pains in the butt that often are linked in the dependency graph only by the fact that it's going to be one developer working on all of them and they can't work on them all simultaneously. It follows that you can only get interesting speed improvements if multiple developers are working on the same change. Note that daily stand up is an example of a meeting which does not make a decision—it could but in practice nobody uses it that way—but instead its function is to create pressure on the critical path. Often unhealthy pressure, someone was sprinting at 100% and now they are getting a little burned out, and daily stand up forces them to do something that they can report at standup lest they be honest and say that they're suffering.)
General principle 3: process helps create shared reality. (Consider two different ways to protect prod from the developers. In one way, everyone commits to main, some file or database table or configmap contains a list of features, and those features can be toggled in dev, uat, or prod. The process here is, whenever you change anything, you wrap it in a feature toggle, so that your change does not impact prod if that toggle is off. Versus, consider having three different branches, you can usually commit new features to dev, eventually we cut a release from Dev and push it to the UAT branch, cut a release from UAT to push to the prod branch. But these are separate branches because we might need to hotfix UAT or Prod. The process here can go in these two different directions, see, but one of them leads to a shared reality, this is the entirety of the code and all of the different ways that it can be configured in our production environment, and we need to carefully consider how we remove those feature toggles—versus the other one has three independent realities and nobody is considering all of the different ways that it can be configured or what is being removed, and periodically you get bugs because people didn't revert the revert—what, you didn't know that you needed to revert the revert, you always need to revert the revert. So process tends to be more lightweight if it generates one shared reality).
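As a tiny illustration of the first option above (the flag names and environments are invented, not from any real codebase): all code lives on main, and per-environment configuration decides what runs, so everyone reasons about one shared reality:

```python
# Minimal feature-toggle sketch: one codebase, one table of flags per environment.
FEATURES = {
    "new_checkout": {"dev": True, "uat": True, "prod": False},
    "beta_search":  {"dev": True, "uat": False, "prod": False},
}

def enabled(feature: str, env: str) -> bool:
    return FEATURES.get(feature, {}).get(env, False)

def checkout(env: str) -> str:
    # Both paths exist in the same code; reviewers can see every possible configuration.
    return "new flow" if enabled("new_checkout", env) else "old flow"

print(checkout("uat"))   # new flow
print(checkout("prod"))  # old flow
```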
General principle 4: process needs to help you figure out, and keep highlighted, the “Useless Without.” (There are many nice-to-haves in a given project. There are a lot of them that people will say are must-haves. The product must be secure, the product must be available at this website address, okay fine. But there is one business goal that the project serves, which, if that business goal is not accomplished, the whole project is useless. That is the Useless Without feature. So I worked on a shop floor system of kiosks for like 6 months once before I determined from talking to the stakeholders that the thing was actually Useless Without time tracking, and this is a sensitive issue because unionized pipefitters are understandably skittish around surveillance technology that could be used in dystopian ways. But we're going to address their needs by looking at the project only, trying to figure out how long each of the steps in building the project takes, but we still don't talk about how we're trying to make the shop floor run efficiently. But you understand, every meeting I had before we had clarified this was actively detrimental to my productivity on this task.)
If you think about it, all thinking is magical (in the sense that we, the universe, and any of this exist at all).
Having said that, there is no magical thinking in asserting that the government is bad at software. The government is bad at 99% of the things it does, but that 1% is keeping stuff running, despite the government trying really hard to fail at that 1%, too.
Stupid comment, I know.
I don’t think the author is accurately characterizing systems thinking but is closer to talking about something like Agile vs Waterfall. Which I think they’re still correct about, and there has been a sharp turn into waterfall-like thinking (though they will never call it that) in software development recently.
Controversial opinion: while learnable to a degree, systems-orientated thought is fundamentally aligned with something biological, either born or developed in early years. People align more with "things" or with "people". It's extremely rare that someone is truly aligned with both.
Great article. I think one missing piece is that if you kick a system too far from its current “equilibrium” it can create self-reinforcing loops (rather than kicking back) that lead to uncontrollable runaway.
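A toy dynamical sketch of that idea (the cubic force law is my own arbitrary choice, purely illustrative): below a threshold the system relaxes back, past it the loop feeds on itself:

```python
def evolve(kick, steps=100):
    x = kick                       # displacement from the equilibrium at 0
    for _ in range(steps):
        x += 0.1 * (-x + x**3)     # restoring below |x| = 1, reinforcing above
        if abs(x) > 1e6:
            return "runaway"
    return round(x, 4)

print(evolve(0.9))  # small kick decays back toward 0: the system kicks back
print(evolve(1.1))  # past the threshold the loop reinforces itself: runaway
```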
I like this saying better: every system is perfect until people get involved. People act irrationally because they are reacting to the nonsense that pervades their reality.
Wait until "world models" fail to work along these lines. Models are always wrong. They're useful only as interpretations; they can never reproduce, reference, or mimic the events in question.
Another great example is Tansley's ecological systems model, which he worked on over many years with influence from Forrester, only for the Odums to develop models, attempt to reproduce them in controlled environments, and watch them fail miserably.
The cybernetic, computational, systems, world models are illusions all. AI has the same limitations simply because the infinity of tasks can never be modeled or automated.
Most of the ideas in the article can be seen, very clearly and cleverly narrated, in Adam Curtis's best series "All Watched Over By Machines of Loving Grace", particularly episode 2.
https://insightmaker.com/insight/2pCL5ePy8wWgr4SN8BQ4DD/The-...
This essay focuses on a very narrow section of systems thinking and systems theory. There's an entire field, with many different subdisciplines beyond just the Club of Rome stuff (and which influenced them directly) that, quite explicitly also deals with systems that "fight back". In fact, any serious definition of systems thinking usually has said dynamics baked into it—systems are assumed to evolve from the start.
I'd encourage people to look into soft systems methodology, critical systems theory, and second order cybernetics, all of which are pretty explicitly concerned with the problem of the "system fighting back". The article is good, as works in progress articles usually are, but the initial premise and resulting coverage are shallow as far as the intellectual depth and lineage here goes.
Both of the books "Systems Thinkers" and "The Emerging Consensus in Social Systems Theory" are nice broad introductions into the historical developments, various lines of thought, and the massive space that is systems thinking. They should both give you a good initial starting point for further research.
Then, “Meltdown” and finally “The Fifth Discipline”
Sounds like the Club of Rome was enamored of Isaac Asimov's Foundation series:
> "The Club of Rome asked an even more intricate question: how would social and economic forces interact in the coming decades? Where were the bottlenecks and feedback mechanisms? Could economic growth continue, or would the world enter a new phase of equilibrium or decline?"
The problem is, as systems grow more complex they often start to demonstrate sensitive dependence on conditions, e.g. tiny variations in inputs to one node of the system resulting in wild swings in outputs from that node. Equally problematic, nodes in a complex system can change their connectivity to other nodes if conditions change enough (think of a breakdown in trade between nations due to wars, natural disasters, diseases, etc.).
The ideal systems to depend on are stable (not hypersensitive to small forcings, with predictable behavior) and have consistent structure. They can still be complicated but should fail gracefully back to simpler structures under stress, e.g. an emergency power supply for electricity at a hospital that normally relies on the grid.
From this perspective, our electrical grids are well-designed systems - not given to huge power fluctuations - that will nevertheless need major expansions and improvements if electricity demand keeps rising with data centers and EVs etc. However, expanding the grid isn't adding fundamental instabilities, it's just modular addition in the same pattern as the existing system.
In contrast, the USA's current financial-monetary system is not that stable, predictable, or reliable. All kinds of fundamental instabilities exist, and wild swings in behavior under pressure are expected - and since everything else relies on it, e.g. you can't update the electrical grid without capital input, you risk avalanching catastrophes by relying on such an unstable system.
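The classic demonstration of that kind of sensitivity is the logistic map at r = 4 (standard chaos-theory material, not something from the article): two starting points differing by one part in a billion end up on completely different trajectories within a few dozen steps:

```python
r = 4.0
x, y = 0.4, 0.4 + 1e-9   # two starts differing by one part in a billion
for step in range(1, 61):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 15 == 0:
        print(f"step {step}: x={x:.4f} y={y:.4f} gap={abs(x - y):.2e}")
# The gap roughly doubles each step (Lyapunov exponent ln 2), so by step ~30
# a 1e-9 input difference has become an order-1 difference in output.
```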
I studied biology in college and this has always been obvious to me, and it shocks me that people with backgrounds in e.g. ecology don't understand that living systems are unpredictable auto-adaptive machines full of feedback loops. How a bunch of ecologists could take doomerism based on "world models" seriously enough to cause a public panic about it (e.g. Paul Ehrlich) baffles me.
Human cultural systems are even worse than non-human living systems: they actively fight you. They are adversarial with regard to predictions made within them. If you're considered a credible source on economics and you say a recession is coming, you change the odds of a recession by causing the system to price in your pronouncement. This is part of why market contrarianism kind of works, but only if the contrarians are actually the minority! If contrarianism becomes popular, it stops being contrarian and stops working.
So... predicting doom and gloom from overpopulation would obviously reduce the future population if people take it seriously.
Tangentially, everything in economics is a paradox. A classic example is the paradox of thrift: if everyone is saving nobody can save, because for one to save another must spend. Pricing paradoxes are another example. When you're selling your labor as an employee you want high wages, high benefits, job security, etc, but when you go shopping you want low wages, low benefits, and a fluid job market... at least if you shop by comparing on price. If you are both a buyer and a seller of labor you are your own adversary in a two-party pricing game.
I personally hold the view that the arrow of time goes in one direction and the future of non-linear computationally irreducible systems cannot be predicted from their current state (unless you are literally God and have access to the full quantum-level state of the whole system and infinite computational power). I don't mean predicting them is hard, but that it's "impossible like perpetual motion" impossible.
I also wonder if we are being fooled by randomness when we think we see a person or a technique that yields good predictions. Are good prophets just luck plus survivorship bias? Obviously we forget all the bad prophets. All lottery winners are lucky, therefore lucky people should play the lottery. But who is lucky? The only way to find out is to play the lottery. Anyone who wins should have played, and anyone who loses should not have played.
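The recession example a few paragraphs up can be put as a toy calculation (all numbers invented): publishing the forecast moves the very quantity being forecast:

```python
def recession_prob(sentiment: float) -> float:
    # Toy mapping: gloomier sentiment -> higher recession probability.
    return max(0.0, min(1.0, 0.5 - sentiment))

sentiment = 0.2
print(recession_prob(sentiment))      # 0.3, while nobody has heard the forecast

# A credible forecaster publishes "30% chance of recession"; agents react,
# spending falls, and sentiment drops by an assumed 0.15.
sentiment -= 0.15
print(recession_prob(sentiment))      # 0.45: announcing the number moved the number
```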
I like this. The author is somewhat needlessly hopeless about the prospects of changing a complex system.
Basic summary is that once you start getting more than a handful of feedback loops, the author through many examples cautions that maps of the system become more like physical maps—necessarily oversimplified. When you have four feedback loops under the right control of management, it's still a diagnostic aid, but you add everything in the US healthcare system, say—fuggetaboudit! And because differences at the small scale add up for long term outcomes, the map doesn't let you forecast the long term, it doesn't let you predict what to optimize, in fact, the only value that the author finds in a systems map for a sufficiently complex system, is as a rhetorical prop to show people why we need to reinvent the whole system. The author thinks this works very well, but only if the new system is grown organically, as it were, rather than imposed structurally.
The first criticism is, this complaint about being unable to change a system, is actually too amorphous and wibbly wobbly to stand. Here's what I mean: the author gives the example of the ICBM project in US military contracting as a success of the "reinvent method", but if you try to poke at that belief, it doesn't "push back" at you. Did we invent a whole new government to solve the ICBM project? I mean we invented other layers of bureaucracy—but they were embedded in the existing government and its bureaucracy. What actually happened was, a complex system existed that contained two subsystems that were, while not entirely decoupled, still operating with substantial independence. Somewhere up the chain, they both folded into the same bureaucracy with the same president, but that bureaucracy minimized a lot of its usual red tape.
This is actually the conceit of Theory of Constraints folks, although I don't usually see them being bold about it. The claim is that all of those hacks that you do in order to ship something? “Colleague gave me a 400 line diff, eh fuckitapprove, we'll do it live” ... that sort of thing? Actually, say ToC folks, that is your system running well, not poorly. The complex system is being pinned to an achievable output goal and it is being allowed to reorganize itself to achieve that goal. This is ultimately the point of the whole ToC ‘finding the bottlenecks’ jargon. “But the safeties are off and someone will get hurt,” you say. And they say somewhat unhelpfully, “That’s for the system to deal with.” Yes, the old configuration had these mechanisms to keep things safe, but you need a new system with new mechanisms. And that's precisely what you see in these new examples, there actually is top-down systems engineering, but around how do we maintain our quality standards, how do we keep the system accountable.
If the first criticism is that the “organically grow a new system to take its place” is airy-fairy, the second criticism is just that the hopelessness is unnecessarily pessimistic. Yes, complex systems with lots of feedback loops do maintain a homeostasis and revert back to that as you poke and prod them. Yes, it is really frustrating how, to change one thing, you must change everything. Yes, it is doubly frustrating that systems that nominally are about providing and promoting X, turn out to provide and promote Y while actually being X-neutral (think for instance about anything which you do which ultimately just allows your manager to cover their ass, say—it is never described as a CYA, just acknowledged silently that way in hallway conversation).
But, we know complex systems that find new homeostatic equilibriums. You, reading this, probably know someone (maybe a friend, maybe a friend of a friend) who kicked drugs. You also know somebody who managed to “lose the weight and keep it off.” You know a player who became a family man, and you yourself remember instances where you were a dumb kid reliving the same shitty day over and over when you could have just done this one damn thing differently—you know it now!—and your days would have gotten steadily better and better rather than the same old rut. So you know that these inscrutably complex things do change. Sometimes it's pinning the result, like someone who drops the pounds because “I just resolved to live like my friend Derek, he agreed to take me a week through everything in his life, I wrote down what he eats for breakfast, when he hits the gym, how much does he talk with friends and family, then I forced myself to live on this schedule for a month and finally I got the hang of it.” Sometimes it's literally changing everything, “Yeah I lost the pounds because I went to live in the Netherlands and school was a 50 minute bike ride from my apartment either way and then I didn't have any friends so I joined the university's competitive ultimate frisbee team, so like my dinner most days was bought that day after practice in a 5 minute trip through the grocery—a raw bell pepper, a ball of mozzarella, maybe some bread in olive oil—I didn't have time to cook anything big.” Or sometimes it was imposed top-down but with good motivation, “yeah, I really wanted to get a role as an orphan in this musical, so I dieted and dieted with the idea of ‘I can binge once I get the part, but I have to sell scrawny orphan when auditions come round soon’ and like it sucked for two weeks but then I got used to the lifestyle and I no longer wanted to binge, funny how that worked out.”
There are so many different stories, and yes they never look like we would imagine success to look like, but being pessimistic about the existence of the solution in general because there's nothing in common about the success stories, I don't know, seems to throw the baby out with the bathwater. There is hope, it's just that when you are looking at the systems map, people get in this rut where they're looking for one thing to change, but really everything needs to change on that map, you've created a big networked dependency graph of the spaces you need to interrogate to figure out whether they are able to cope with the new way of doing things and, if not, are they going to dig their heels in and try to block the change. There's still use in it, you just need to view the whole graph holistically.
>How did the leap from Raid's world editor, to SimCity with its urban design theories, happen?
>WW: First, it was just a toy for me. I was just making my editor more and more elaborate. I thought it would be cool to have the world come to life. So I started researching books on urban dynamics, and traffic, and things like that. I came across the work of Jay Forrester, who was kind of the father of system dynamics. He was actually one of the first people I found that actually simulated a city on a computer. Except in his simulation, there was no map; it was just numbers. It was like population level, number of jobs -- it was kind of a spreadsheet model.
>So I took his approach to it, and then applied a lot of the cellular automata stuff that I had learned earlier, and get these emergent dynamics that he wasn't getting in his model. I found when I was reading all these theories about urban dynamics and city behavior, that when I had a toy simulated version on the computer, it made the subject much more interesting than reading a book -- because I could go to my computer model and start experimenting.
>That just brought the whole subject to life for me and then, more and more, I started thinking, "Other people might enjoy this." But even then I never thought SimCity would have a broad appeal. I thought it might appeal to a few architects and city planner types, but not average people.
But Will's goal was to make a game that was fun to play, not to accurately simulate reality or make predictions. Intentionally inspiring magical systems thinking for entertainment, education, and storytelling!
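For flavor, here is a toy cellular-automaton sketch of the contrast Wright describes: local cell rules producing spatial, emergent growth that a pure spreadsheet model of aggregate numbers can't show. The rules are invented for illustration and are not SimCity's actual rules:

```python
import random

random.seed(1)
SIZE = 16
grid = [[0] * SIZE for _ in range(SIZE)]
grid[SIZE // 2][SIZE // 2] = 1            # seed "downtown"

def developed_neighbors(g, i, j):
    return sum(g[(i + di) % SIZE][(j + dj) % SIZE]
               for di in (-1, 0, 1) for dj in (-1, 0, 1)
               if (di, dj) != (0, 0))

for _ in range(30):
    nxt = [row[:] for row in grid]
    for i in range(SIZE):
        for j in range(SIZE):
            n = developed_neighbors(grid, i, j)
            if grid[i][j] == 0 and n in (1, 2, 3) and random.random() < 0.3:
                nxt[i][j] = 1             # growth near existing development
            elif grid[i][j] == 1 and n > 5:
                nxt[i][j] = 0             # decay under overcrowding
    grid = nxt

print("\n".join("".join("#" if c else "." for c in row) for row in grid))
```

An aggregate model would only report the total number of developed cells; the grid shows where and in what shapes growth emerges, which is the part players respond to.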
>These reverse diagrams map and translate the rules of a complex simulation program into a form that is more easily digested, embedded, disseminated, and discussed (Latour 1986).
>The technique is inspired by the game designer Stone Librande’s one page game design documents (Librande 2010).
>If we merge the reverse diagram with an interactive approach—e.g. Bret Victor’s Nile Visualization (Victor 2013), such diagrams could be used generatively, to describe programs, and interactively, to allow rich introspection and manipulation of software.
Will Wright on Designing User Interfaces to Simulation Games (1996) (2023 Video Update):
>Some muckety-muck architecture magazine was interviewing Will Wright about SimCity, and they asked him a question something like “which ontological urban paradigm most influenced your design of the simulator, the Exo-Hamiltonian Pattern Language Movement, or the Intra-Urban Deconstructionist Sub-Culture Hypothesis?” He replied, “I just kind of optimized for game play.”
>DonHopkins on Jan 16, 2020, on: Reverse engineering course
Will Wright defined the "Simulator Effect" as how game players imagine a simulation is vastly more detailed, deep, rich, and complex than it actually is: a magical misunderstanding that you shouldn’t talk them out of. He designs games to run on two computers at once: the electronic one on the player’s desk, running his shallow tame simulation, and the biological one in the player’s head, running their deep wild imagination.
"Reverse Over-Engineering" is a desirable outcome of the Simulator Effect: what game players (and game developers trying to clone the game) do when they use their imagination to extrapolate how a game works, and totally overestimate how much work and modeling the simulator is actually doing, because they filled in the gaps with their imagination and preconceptions and assumptions, instead of realizing how many simplifications and shortcuts and illusions it actually used.
>There's a name for what Wright calls "the simulator effect" in the video: apophenia. There's a good GDC video on YouTube where Tynan Sylvester (the creator of RimWorld) talks about using this effect in game design.
>Apophenia (/æpoʊˈfiːniə/) is the tendency to mistakenly perceive connections and meaning between unrelated things. The term (German: Apophänie) was coined by psychiatrist Klaus Conrad in his 1958 publication on the beginning stages of schizophrenia. He defined it as "unmotivated seeing of connections [accompanied by] a specific feeling of abnormal meaningfulness". He described the early stages of delusional thought as self-referential, over-interpretations of actual sensory perceptions, as opposed to hallucinations.
RimWorld: Contrarian, Ridiculous, and Impossible Game Design Methods
>Tip 5: On world building. As you know by now, Will's approach to creating games is all about building a coherent and compelling player experience. His games are composed of layered systems that engage players creatively, and lead to personalized, sometimes unexpected outcomes. In these types of games, players will often assume that the underlying system is smarter than it actually is. This happens because there's a strong mental model in place, guiding the game design, and enhancing the player's ability to imagine a coherent context that explains all the myriad details and dynamics happening within that game experience.
>Now let's apply this to your project: What mental model are you building, and what story are you causing to unfold between your player's ears? And how does the feature set in your game or product support that story? Once you start approaching your product design that way, you'll be set up to get your customers to buy into the microworld that you're building, and start to imagine that it's richer and more detailed than it actually is.
So what does that make you, for "feeling" and "trolling" by projecting your ignorant political ideology onto something you know nothing about, instead of "thinking" and "experimenting" and "publishing" and "collaborating"? Exactly what profession are you role-playing?
You clearly haven't read much in the field of systems thinking, then. Many of the practitioners and most of its pioneers are in fact actual mathematicians, biologists, or computer scientists (Wiener, von Foerster, Banathy, etc.).
The entire field of process design & automatic process control. This literally is the O.G. job description of a chemical engineer. The field of grid design and balancing: again, the job description of an electrical engineer.
Yes, but in all your examples it's the same: you learn the specific subject and you are implicitly learning "systems thinking"; it's not like you learn "systems thinking" first as the hard part, and then you learn electronic components as an implementation detail to become an electrical engineer.
I see your point. Indeed we do learn it bottom-up. But why do you think the opposite is impossible? It seems like a transferable skill across domains.
This is totally orthogonal to your original claim that systems thinkers are "liberal" philosophers but OK.
McCulloch and Pitts, early cyberneticians, literally invented neural networks. See the wikipedia page on neural nets.
Another really simple one: Law of Requisite Variety. If that's too simple, I'd encourage you to bear in mind that Norbert Wiener, beyond his direct contributions to mathematics in the form of signal processing filters, is also responsible for the view of control as communication, which motivates much of the approach to control and stability in digital systems.
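A bare-bones toy reading of the Law of Requisite Variety (my own simplification, not Ashby's formalism): if the regulator has fewer distinct responses than the environment has disturbances, some disturbance must go unabsorbed:

```python
# Toy model: each regulator response neutralizes exactly one kind of
# disturbance. With fewer response types than disturbance types, some
# disturbances always get through ("only variety can absorb variety").

DISTURBANCES = {"heat", "cold", "wind", "rain"}

def regulate(responses: set) -> str:
    unabsorbed = DISTURBANCES - responses
    return "regulated" if not unabsorbed else f"fails on: {sorted(unabsorbed)}"

print(regulate({"heat", "cold", "wind", "rain"}))  # regulated: variety matches
print(regulate({"heat", "cold"}))                  # fails on: ['rain', 'wind']
```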
Here's one: reading the most basic Wikipedia page about a subject before you make up your mind based on your political ideology instead of any actual knowledge.
would https://en.m.wikipedia.org/wiki/Warren_Sturgis_McCulloch be what you mean?
and if not, can you give the right pointer?
https://en.wikipedia.org/wiki/American_Society_for_Cyberneti...
https://en.wikipedia.org/wiki/Complexity_theory
Modernizing software systems takes time because of inherent corruption in the procurement process or the workings of the consulting companies involved. Those problems could be solved much faster and cheaper if a knowledgeable tech person were involved.
Hertz vs. Accenture: In 2019, car rental company Hertz sued Accenture for $32 million in fees plus additional damages over a failed website and mobile app project. Hertz claimed Accenture failed to deliver a functional product, missed multiple deadlines, and built a system that did not meet the agreed-upon requirements.
Marin County vs. Deloitte: In 2010, California's Marin County sued Deloitte Consulting for $30 million over a failed SAP ERP implementation. The county alleged Deloitte misrepresented its skills and used the county as a "training ground" for inexperienced consultants.
The way I learned "systems thinking" explicitly includes the perspectives this article offers to refute it - a system model is useful but only a model, it is better used to understand an existing system than to design a new one, assume the system will react to resist intervention. I've found this definition of systems thinking extremely useful as a way to look reductively at a complex system - e.g. we keep investing in quality but having more outages anyway, maybe something is optimizing for the wrong goal - and intervene to shift behaviour without tearing down the whole thing, something this article dismisses as impossible.
The author and I would agree on Gall's Law. But the author's conclusion to "start with a simple system that works" commits the same hubris that the article, and Gall, warn against - how do you know the "simple" system you design will work, or will be simple? You can't know either of those things just by being clever. You have to see the system working in reality, and you have to see if the simplicity you imagined actually corresponds to how it works in reality. Gall's Law isn't saying "if you start simple it will work", it's saying "if it doesn't work then adding complexity won't fix it".
This article reads a bit like the author has encountered resistance from people in the past from people who cited "systems thinking" as the reason for their resistance, and so the author wants to discredit that term. Maybe the term means different things to different people, or it's been used in bad faith. But what the article attacks isn't systems thinking as I know it, more like high modernism. The author and systems thinking might get along quite well if they ever actually met.
Articles debunking them are always full of fundamental misunderstandings about the discipline. (The ones supporting them are obviously wrong.) And people focusing on understanding the discipline never actually refer to them in any way.
All valid criticisms, but somehow it sounds exactly like something a member of inept bureaucracy would say.
It's got all the essential elements of Factorio that make it so interesting and compelling, which apply to so many other fields from VLSI design to networking to cloud computing.
But you mine shapes and colors and combine them into progressively more complex patterns!
https://en.wikipedia.org/wiki/Shapez_2
Systems don't do that. Only constituents who fear particular consequences do.
Systems also don't care about levels of complexity. Especially since it's insanely hard to actually break systems that are held together by only the "what the fuck is going on, let's look into that" kind. Hours, days, weeks, later, things run again. BILLIONS lost. Oh, we wish ...
At the end of the day, the term Systems Thinking is overloaded by all the parts that have been invented by so called economists and "the financial industry", which makes me chuckle every time now that it's 2025 and oil rich countries have been in development for decades, the advertisement industry is factory farming content creators and economists and multi-billionaires want more tikktoccc and instagwam to get into the backs of teen heads.
If you are a SWE, systems architect or anything in that sphere, please, ... act like you care about the people you are building for ... take some time off if you can and take care of must be taken care of, ... it's just systems, after all.
Systems don't do that. Only constituents who fear particular consequences do. <<
For example, the human body is pretty decent at maintaining a fixed internal temperature.
Cities supposedly maintain a fairly stable transit time even as transit infrastructure improves.
These are part of a system. Ignoring these components gives you an incomplete model.
(All models are incomplete, by definition, but ignoring constituents that have a major influence greatly reduces the effectiveness of your model)
[1] https://en.wikipedia.org/wiki/Reflexivity_(social_theory)
(You will never get all them right. You will never even be able to list what their entirety will be. But you have to be able to predict the order of magnitude of a few of them.)
I found it much better to take the first step and progress from there, even when the full solution is not known. Maybe it's a testament to the limits of my own context window. Having said, I'm not advocating for abandoning architecture or engineering principles. I like the idea of "Growing software" [0]. It's perhaps a more holistic metaphor.
In terms of short circuiting large bureaucracies, I found "Fighter Mafia" [1] to be an interesting example of this. A group of military officials/contractors managed to influence aircraft design, somewhat outside of the "official" channels. The outcome was better than if it went through normal channels.
[0]: http://www.growing-object-oriented-software.com
[1]: https://en.wikipedia.org/wiki/Fighter_Mafia
The things the author complains about seem to be "parts of systems thinking they aren't aware of". The field is still developing.
I think it's worth considering that the theories you're familiar with are incredibly niche, have never gained any foothold in mainstream discussions of system dynamics, and it's not wrong for people not to be aware of them (or to choose not to mention them) in a post addressed at general audiences.
Further, you just missed the opportunity to explain these concepts to a broader HN audience and maybe make sure that the next time someone writes about it, they are aware of this work.
You missed the opportunity to ask a simple question - what is metacybernetics? - and decided everything on that list was just as niche.
I don't think commenters should be expected to provide full overviews of topics just to inform others. Parent gave plenty of pointers beyond metacybernetics, all of which are certainly discoverable. If you are curious, read about it. It's not the responsibility of random strangers to educate you.
would https://en.m.wikipedia.org/wiki/Warren_Sturgis_McCulloch be what you mean?
and if not,can you give the right pointer?
https://en.wikipedia.org/wiki/American_Society_for_Cyberneti...
https://en.wikipedia.org/wiki/Complexity_theory
The complicated systems that are alluded to here, are usually best modeled as optimizers or control systems. Both have clear definitions and vast mathematical corpi.
In a house with heating, it's difficult to cool the whole house or even a single room by leaving the freezer door open. Why? because the system is programmed to be a certain temperate and has a mechanism continuously driving it to that temperature. It's difficult to knock over one of the Boston Dynamics robots for the same reason.
If the government declares that rents cannot be higher than a certain price per square foot, then mysteriously only the renters with impeccable financials and renting history will be able to get houses. And some houses will stop being available for rent. Why? Because the market is optimizing for value creation. Honest, considerate renters devalue the property less during their stay, and some properties are worth more than the maximum rental price when used for another purpose. If you limit the price, agents will fallback to other mechanisms to determine the most valuable course of action. In this example that is minimizing missed payments, evictions, and property damage.
Unless you affect the controller or optimizer hidden in each system, you can't manipulate the system effectively. Usually you aren't able to do this, and so the system is difficult to control. It's easier to rip out a thermostat than to disable the desire of millions of humans to create value. If you can't model the system in a rigorous way, and then use math to predict and explain it, then you won't be able to manipulate it. Saying that you are using "systems thinking" won't change that.
I think cooling with an open freezer is impossible in general? Or is that your point and I don't understand the argument?
That example was meant to illustrate why simple one off actions have a diminished or imperceptible effect on the system.
[0] https://cdn.factorio.com/assets/blog-sync/fff-420-line-art.p...
I've seen it happen more than a few times that when software needs to get made quickly, a crack team is assembled and Agile ceremonies and other bureaucratic decision processes are bypassed.
Are there general principles for when process is helpful and when it's not?
If you have need for speed, a team that knows the space, and crucially a leader who can be trusted to depart from the usual process when that tradeoff better meets business needs, it can work really well. But also comes with increased risk.
General principle 2: create focus on the critical path. (If each ticket you work on is slightly different from other tickets and no cookie-cutter solutions exist, then there is some chain of slow, annoying, winding steps, and the rest of the dependency graph doesn't really matter, just these big pains in the butt that often are linked in the dependency graph only by the fact that it's going to be one developer working on all of them and they can't work on them all simultaneously. It follows that you can only get interesting speed improvements if multiple developers are working on the same change. Note that daily stand up is an example of a meeting which does not make a decision—it could but in practice nobody uses it that way—but instead its function is to create pressure on the critical path. Often unhealthy pressure, someone was sprinting at 100% and now they are getting a little burned out, and daily stand up forces them to do something that they can report at standup lest they be honest and say that they're suffering.)
General principle 3: process helps create shared reality. (Consider two different ways to protect prod from the developers. In one way, everyone commits to main, some file or database table or configmap contains a list of features, and those features can be toggled in dev, uat, or prod. The process here is, whenever you change anything, you wrap it in a feature toggle, so that your change does not impact prod if that toggle is off. Versus, consider having three different branches, you can usually commit new features to dev, eventually we cut a release from Dev and push it to the UAT branch, cut a release from UAT to push to the prod branch. But these are separate branches because we might need to hotfix UAT or Prod. The process here can go in these two different directions, see, but one of them leads to a shared reality, this is the entirety of the code and all of the different ways that it can be configured in our production environment, and we need to carefully consider how we remove those feature toggles—versus the other one has three independent realities and nobody is considering all of the different ways that it can be configured or what is being removed, and periodically you get bugs because people didn't revert the revert—what, you didn't know that you needed to revert the revert, you always need to revert the revert. So process tends to be more lightweight if it generates one shared reality).
General principle 4: process needs to help you figure out, and keep highlighted, the “Useless Without.” (There are many nice-to-haves, in a given project. There are a lot of them that people will say are must-haves. The product must be secure, the product must be available at this website address, okay fine. But there is one business goal that the project serves, which, if that business goal is not accomplished, the whole project is useless. That is the Useless Without feature. So I worked on a shop floor system of kiosks for like 6 months once before I determined from talking to the stakeholders that the thing was actually Useless Without time tracking, and this is a sensitive issue because unionized pipefitters are understandably skittish around surveillance technology that could be used in dystopian ways. But we're going to address their needs by looking at the project only, trying to figure out how long each of the steps in building the project takes, but we still don't talk about how we're trying to make the shop floor run efficiently. But you understand every meeting I had before we had clarified this, was actively detrimental to my productivity on this task.)
Having said that, there is no magical thinking in asserting that the government is bad at software. The government is bad at 99% of the things it does, but that 1% is keeping stuff running, despite the government trying really hard to fail at that 1%, too.
Stupid comment, I know.
Another great example is Tansley's ecological systems model that he worked on for over many years with influence from Forrester only for the Odums to develop models, attempt to reproduce them in controlled environments and watch them fail miserably.
The cybernetic, computational, systems, world models are illusions all. AI has the same limitations simply because the infinity of tasks can never be modeled or automated.
Most of the ideas in the article can be seen, very clearly and cleverly narrated, in Curtis's best series "All Watched Over By Machines of Loving Grace" particularly episode 2.
https://insightmaker.com/insight/2pCL5ePy8wWgr4SN8BQ4DD/The-...
I'd encourage people to look into soft systems methodology, critical systems theory, and second order cybernetics, all of which are pretty explicitly concerned with the problem of the "system fighting back". The article is good, as works in progress articles usually are, but the initial premise and resulting coverage are shallow as far as the intellectual depth and lineage here goes.
Then, “Meltdown” and finally “The Fifth Discipline”
> "The Club of Rome asked an even more intricate question: how would social and economic forces interact in the coming decades? Where were the bottlenecks and feedback mechanisms? Could economic growth continue, or would the world enter a new phase of equilibrium or decline?"
The problem is, as systems grow more complex they often start to demonstrate sensitive dependence on conditions, eg with tiny variations in inputs to one node of the system resulting in wild swings in outputs from that node. Equally problematic, nodes in a complex system can change their connectivity to other nodes if conditions change enough (think of a breakdown in trade between nations due to wars, natural disasters, diseases etc).
The ideal systems to depend on are stable (not hypersensitive to small forcings, with predictable behavior) and have consistent structure. They can still be complicated but should fail gracefully back to simpler structures under stress, eg an emergency power supply for electricity at a hospital that normally relies on the grid.
From this perspective, our electrical grids are well-designed systems - not given to huge power fluctuations - that will nevertheless need major expansions and improvements if electricity demand keeps rising with data centers and eVs etc. However, expanding the grid isn't adding fundamental instabilities, it's just modular addition in the same pattern as the existing system.
In contrast, the USA's current financial-monetary system is not that stable, predictable, or reliable. All kinds of fundamental instabilities exist, and wild swings in behavior under pressure are expected - and since everything else relies on it, eg you can't update the electrical grid without capital input, you risk avalanching catastrophes by relying on such an unstable system.
Human cultural systems are even worse than non-human living systems: they actively fight you. They are adversarial with regard to predictions made within them. If you're considered a credible source on economics and you say a recession is coming, you change the odds of a recession by causing the system to price in your pronouncement. This is part of why market contrarianism kind of works, but only if the contrarians are actually the minority! If contrarianism becomes popular, it stops being contrarian and stops working.
So... predicting doom and gloom from overpopulation would obviously reduce the future population if people take it seriously.
Tangentially, everything in economics is a paradox. A classic example is the paradox of thrift: if everyone is saving nobody can save because for one to save another must spend. Pricing paradoxes are another example. When you're selling your labor as an employee you want high wages, high benefits, jobs security, etc, but when you go shopping you want low wages, low benefits, and a fluid job market... at least if you shop by comparing on price. If you are both a buyer and a seller of labor you are your own adversary in a two-party pricing game.
I personally hold the view that the arrow of time goes in one direction and the future of non-linear computationally irreducible systems cannot be predicted from their current state (unless you are literally God and have access to the full quantum-level state of the whole system and infinite computational power). I don't mean predicting them is hard, but that it's "impossible like perpetual motion" impossible.
I also wonder if we are being fooled by randomness when we think we see a person or a technique that yields good predictions. Are good prophets just luck plus survivorship bias? Obviously we forget all the bad prophets. All lottery winners are lucky, therefore lucky people should play the lottery. But who is lucky? The only way to find out is to play the lottery. Anyone who wins should have played, and anyone who loses should not have played.
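The survivorship-bias point is easy to check numerically: hand thousands of "prophets" coin-flip predictions and a handful will compile flawless track records anyway.

    # Fooled-by-randomness sketch: purely random forecasters, yet some
    # end up with perfect records through sheer volume.
    import random

    random.seed(1)
    n_prophets, n_calls = 10_000, 10
    perfect = sum(
        all(random.random() < 0.5 for _ in range(n_calls))
        for _ in range(n_prophets)
    )
    # Expect about 10_000 / 2**10, i.e. roughly ten flawless "seers".
    print(f"{perfect} of {n_prophets} prophets went {n_calls} for {n_calls}")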
The basic summary is that, as the author cautions through many examples, once you get more than a handful of feedback loops, maps of the system become like physical maps—necessarily oversimplified. With four feedback loops under the control of management, it's still a diagnostic aid, but add everything in the US healthcare system, say—fuggetaboudit! And because differences at the small scale add up in long-term outcomes, the map doesn't let you forecast the long term and doesn't let you predict what to optimize. In fact, the only value the author finds in a systems map for a sufficiently complex system is as a rhetorical prop to show people why we need to reinvent the whole system. The author thinks this works very well, but only if the new system is grown organically, as it were, rather than imposed structurally.
The first criticism is that this complaint about being unable to change a system is too amorphous and wibbly-wobbly to stand. Here's what I mean: the author gives the example of the ICBM project in US military contracting as a success of the "reinvent method", but if you try to poke at that belief, it doesn't "push back" at you. Did we invent a whole new government to solve the ICBM project? We invented other layers of bureaucracy—but they were embedded in the existing government and its bureaucracy. What actually happened was that a complex system existed containing two subsystems which, while not entirely decoupled, still operated with substantial independence. Somewhere up the chain they both folded into the same bureaucracy with the same president, but that bureaucracy minimized a lot of its usual red tape.
This is actually the conceit of Theory of Constraints folks, although I don't usually see them being bold about it. The claim is that all of those hacks you do in order to ship something? “Colleague gave me a 400 line diff, eh fuckitapprove, we'll do it live” ... that sort of thing? Actually, say ToC folks, that is your system running well, not poorly. The complex system is being pinned to an achievable output goal and is being allowed to reorganize itself to achieve that goal. This is ultimately the point of the whole ToC ‘finding the bottlenecks’ jargon. “But the safeties are off and someone will get hurt,” you say. And they answer, somewhat unhelpfully, “That's for the system to deal with.” Yes, the old configuration had these mechanisms to keep things safe, but you need a new system with new mechanisms. And that's precisely what you see in these new examples: there actually is top-down systems engineering, but it's aimed at how we maintain our quality standards and how we keep the system accountable.
If the first criticism is that “organically grow a new system to take its place” is airy-fairy, the second criticism is that the hopelessness is unnecessarily pessimistic. Yes, complex systems with lots of feedback loops maintain a homeostasis and revert back to it as you poke and prod them. Yes, it is really frustrating that, to change one thing, you must change everything. Yes, it is doubly frustrating that systems nominally about providing and promoting X turn out to provide and promote Y while actually being X-neutral (think, for instance, of anything you do which ultimately just allows your manager to cover their ass—it is never described as CYA, just acknowledged silently that way in hallway conversation).
But, we know complex systems that find new homeostatic equilibriums. You, reading this, probably know someone (maybe a friend, maybe a friend of a friend) who kicked drugs. You also know somebody who managed to “lose the weight and keep it off.” You know a player who became a family man, and you yourself remember instances where you were a dumb kid reliving the same shitty day over and over when you could have just done this one damn thing differently—you know it now!—and your days would have gotten steadily better and better rather than the same old rut. So you know that these inscrutably complex things do change. Sometimes it's pinning the result, like someone who drops the pounds because “I just resolved to live like my friend Derek, he agreed to take me a week through everything in his life, I wrote down what he eats for breakfast, when he hits the gym, how much he talks with friends and family, then I forced myself to live on this schedule for a month and finally I got the hang of it.” Sometimes it's literally changing everything, “Yeah I lost the pounds because I went to live in the Netherlands and school was a 50 minute bike ride from my apartment each way and then I didn't have any friends so I joined the university's competitive ultimate frisbee team, so like my dinner most days was bought that day after practice in a 5 minute trip through the grocery—a raw bell pepper, a ball of mozzarella, maybe some bread in olive oil—I didn't have time to cook anything big.” Or sometimes it was imposed top-down but with good motivation, “yeah, I really wanted to get a role as an orphan in this musical, so I dieted and dieted with the idea of ‘I can binge once I get the part, but I have to sell scrawny orphan when auditions come round soon’ and like it sucked for two weeks but then I got used to the lifestyle and I no longer wanted to binge, funny how that worked out.”
There are so many different stories, and yes, they never look like what we imagine success to look like, but being pessimistic about the existence of a solution in general because the success stories have nothing in common seems, I don't know, to throw the baby out with the bathwater. There is hope. It's just that when you are looking at the systems map, people get into this rut of looking for one thing to change, when really everything on that map needs to change: you've created a big networked dependency graph of the places you need to interrogate to figure out whether they can cope with the new way of doing things and, if not, whether they are going to dig their heels in and try to block the change. There's still use in it, you just need to view the whole graph holistically.
https://www.gamedeveloper.com/business/the-replay-interviews...
>How did the leap from Raid's world editor, to SimCity with its urban design theories, happen?
>WW: First, it was just a toy for me. I was just making my editor more and more elaborate. I thought it would be cool to have the world come to life. So I started researching books on urban dynamics, and traffic, and things like that. I came across the work of Jay Forrester, who was kind of the father of system dynamics. He was actually one of the first people I found that actually simulated a city on a computer. Except in his simulation, there was no map; it was just numbers. It was like population level, number of jobs -- it was kind of a spreadsheet model.
>So I took his approach to it, and then applied a lot of the cellular automata stuff that I had learned earlier, and got these emergent dynamics that he wasn't getting in his model. I found when I was reading all these theories about urban dynamics and city behavior, that when I had a toy simulated version on the computer, it made the subject much more interesting than reading a book -- because I could go to my computer model and start experimenting.
>That just brought the whole subject to life for me and then, more and more, I started thinking, "Other people might enjoy this." But even then I never thought SimCity would have a broad appeal. I thought it might appeal to a few architects and city planner types, but not average people.
But Will's goal was to make a game that was fun to play, not to accurately simulate reality or make predictions. Intentionally inspiring magical systems thinking for entertainment, education, and storytelling!
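For contrast with Forrester's spreadsheet-of-aggregates approach, here is a toy cellular automaton in the spirit Wright describes (rules invented here, not SimCity's actual model): purely local update rules, and city-like clusters emerge anyway.

    # Toy "land value" CA: each cell grows if its neighborhood is already
    # developed and decays otherwise, so clusters emerge from local rules.
    import random

    random.seed(0)
    N = 16
    value = [[random.random() for _ in range(N)] for _ in range(N)]

    def step(grid):
        new = [[0.0] * N for _ in range(N)]
        for i in range(N):
            for j in range(N):
                nbrs = [grid[(i + di) % N][(j + dj) % N]
                        for di in (-1, 0, 1) for dj in (-1, 0, 1)
                        if (di, dj) != (0, 0)]
                avg = sum(nbrs) / 8
                delta = 0.1 if avg > 0.5 else -0.1  # development attracts development
                new[i][j] = max(0.0, min(1.0, grid[i][j] + delta))
        return new

    for _ in range(20):
        value = step(value)
    for row in value:
        print("".join("#" if v > 0.5 else "." for v in row))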
Chaim Gingold's SimCity Reverse Diagrams:
https://smalltalkzoo.computerhistory.org/users/Dan/uploads/S...
>These reverse diagrams map and translate the rules of a complex simulation program into a form that is more easily digested, embedded, disseminated, and discussed (Latour 1986).
>The technique is inspired by the game designer Stone Librande’s one page game design documents (Librande 2010).
>If we merge the reverse diagram with an interactive approach—e.g. Bret Victor’s Nile Visualization (Victor 2013), such diagrams could be used generatively, to describe programs, and interactively, to allow rich introspection and manipulation of software.
Will Wright on Designing User Interfaces to Simulation Games (1996) (2023 Video Update):
https://news.ycombinator.com/item?id=34573406
https://donhopkins.medium.com/designing-user-interfaces-to-s...
>Some muckety-muck architecture magazine was interviewing Will Wright about SimCity, and they asked him a question something like “which ontological urban paradigm most influenced your design of the simulator, the Exo-Hamiltonian Pattern Language Movement, or the Intra-Urban Deconstructionist Sub-Culture Hypothesis?” He replied, “I just kind of optimized for game play.”
https://news.ycombinator.com/item?id=22062590
>DonHopkins on Jan 16, 2020, on: Reverse engineering course
Will Wright defined the "Simulator Effect" as how game players imagine a simulation is vastly more detailed, deep, rich, and complex than it actually is: a magical misunderstanding that you shouldn’t talk them out of. He designs games to run on two computers at once: the electronic one on the player’s desk, running his shallow tame simulation, and the biological one in the player’s head, running their deep wild imagination. "Reverse Over-Engineering" is a desirable outcome of the Simulator Effect: what game players (and game developers trying to clone the game) do when they use their imagination to extrapolate how a game works, and totally overestimate how much work and modeling the simulator is actually doing, because they filled in the gaps with their imagination and preconceptions and assumptions, instead of realizing how many simplifications and shortcuts and illusions it actually used.
https://www.masterclass.com/classes/will-wright-teaches-game...
>There's a name for what Wright calls "the simulator effect" in the video: apophenia. There's a good GDC video on YouTube where Tynan Sylvester (the creator of RimWorld) talks about using this effect in game design.
https://en.wikipedia.org/wiki/Apophenia
>Apophenia (/æpoʊˈfiːniə/) is the tendency to mistakenly perceive connections and meaning between unrelated things. The term (German: Apophänie) was coined by psychiatrist Klaus Conrad in his 1958 publication on the beginning stages of schizophrenia. He defined it as "unmotivated seeing of connections [accompanied by] a specific feeling of abnormal meaningfulness". He described the early stages of delusional thought as self-referential, over-interpretations of actual sensory perceptions, as opposed to hallucinations.
RimWorld: Contrarian, Ridiculous, and Impossible Game Design Methods
https://www.youtube.com/watch?v=VdqhHKjepiE
5 game design tips from Sims creator Will Wright
https://www.youtube.com/watch?v=scS3f_YSYO0
>Tip 5: On world building. As you know by now, Will's approach to creating games is all about building a coherent and compelling player experience. His games are composed of layered systems that engage players creatively, and lead to personalized, sometimes unexpected outcomes. In these types of games, players will often assume that the underlying system is smarter than it actually is. This happens because there's a strong mental model in place, guiding the game design, and enhancing the player's ability to imagine a coherent context that explains all the myriad details and dynamics happening within that game experience.
>Now let's apply this to your project: What mental model are you building, and what story are you causing to unfold between your player's ears? And how does the feature set in your game or product support that story? Once you start approaching your product design that way, you'll be set up to get your customers to buy into the microworld that you're building, and start to imagine that it's richer and more detailed than it actually is.
McCulloch and Pitts, early cyberneticians, literally invented neural networks. See the Wikipedia page on neural nets.
Another really simple one: Law of Requisite Variety. If that's too simple, I'd encourage you to bear in mind that Norbert Wiener, beyond his direct contributions to mathematics in the form of signal processing filters, is also responsible for the view of control as communication, which motivates much of the approach to control and stability in digital systems.
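A minimal sketch of the Law of Requisite Variety, under an invented toy setup: disturbances and responses combine additively, and a regulator can only absorb as many distinct disturbances as it has distinct responses.

    # Ashby's Law of Requisite Variety: outcome = (d + r) % 8, where the
    # regulator picks r from its repertoire R; outcome 0 means the
    # disturbance was absorbed. Only variety can absorb variety.
    D = range(8)

    def absorbed(R):
        return sum(any((d + r) % 8 == 0 for r in R) for d in D)

    for k in (8, 4, 2, 1):
        print(f"regulator variety {k}: absorbs {absorbed(range(k))} of {len(D)} disturbances")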
Hertz vs. Accenture: In 2019, car rental company Hertz sued Accenture for $32 million in fees plus additional damages over a failed website and mobile app project. Hertz claimed Accenture failed to deliver a functional product, missed multiple deadlines, and built a system that did not meet the agreed-upon requirements.
Marin County vs. Deloitte: In 2010, California's Marin County sued Deloitte Consulting for $30 million over a failed SAP ERP implementation. The county alleged Deloitte misrepresented its skills and used the county as a "training ground" for inexperienced consultants.