It's an ad. They have no demo, you have to schedule a demo with them.
Then they mention examples (like how simulated customers would react to price changes), but they don't provide any evidence that such results could be trusted or relied upon, esp. given that LLMs are known to hallucinate.
For what? There is no commercial service or product linked.
> They have no demo, you have to schedule a demo with them.
It's open source; you can install it and run the examples.
Or modify them and run your own (the documentation for the simulation config could be a lot better, sure.)
> Then they mention examples (like how simulated customers would react to price changes), but they don't provide any evidence that such results could be trusted or relied upon, esp. given that LLMs are known to hallucinate.
It's not a commercial service offering validated predictive simulations; it's an open-source project for setting up and running agent-based simulations, which you need before you can validate and calibrate them.
Simulatrex is an open-source project focused on Generative Agent-Based Modeling (GABM), utilizing large language models for more accurate simulations. It's designed for researchers and developers interested in exploring human behavior and social dynamics. GABM in Simulatrex enhances agents with cognitive capabilities, allowing for more realistic decision-making processes in simulations. This tool is particularly useful in social sciences, policy analysis, and digital service design, offering a platform for innovative and relevant experimentation in a variety of settings.
In an agent-based simulation, this tool purports to make the simulated agents behave more like real humans, using LLMs to make decisions at each simulated decision point. I gather they think this will be a more realistic approximation of real human decision making than whatever heuristics other simulations use.
Simulations usually involve a set of simulated agents, with an environment and some events that are going to occur. Depending on the simulation type, at each step or scheduled event, agents will be given an opportunity to react to the current environment. In a simple simulation of markets with a randomized allocation of wares, you could have an agent do something dumb like "sell my most expensive ware, buy the cheapest ware". Then you'd run this strategy to see what happens and analyze the simulation over time, say to look at what wealth distribution turns out to be.
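The market toy model above can be sketched in a few lines of Python. This is purely illustrative and not Simulatrex's actual API; all names (`Agent`, `step`, the ware/cash setup) are made up for the example.

```python
import random

random.seed(42)  # make the randomized allocation reproducible

class Agent:
    def __init__(self, wares):
        self.wares = wares   # prices of the wares this agent holds
        self.cash = 100.0

    def step(self, market):
        # Dumb heuristic: sell my most expensive ware, buy the cheapest on offer.
        if self.wares:
            self.cash += self.wares.pop(self.wares.index(max(self.wares)))
        if market and self.cash >= min(market):
            cheapest = market.pop(market.index(min(market)))
            self.cash -= cheapest
            self.wares.append(cheapest)

    def wealth(self):
        return self.cash + sum(self.wares)

# Randomized initial allocation of wares to agents and to the open market.
agents = [Agent([random.uniform(1, 20) for _ in range(3)]) for _ in range(10)]
market = [random.uniform(1, 20) for _ in range(10)]

for tick in range(50):          # at each step, every agent reacts to the market
    for agent in agents:
        agent.step(market)

wealth = sorted(a.wealth() for a in agents)
print(f"poorest={wealth[0]:.1f} richest={wealth[-1]:.1f}")
```

Analyzing `wealth` over many runs is the "look at what the wealth distribution turns out to be" part.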
In a more complicated simulation, you could try to actually model a set of real world actors using real world data as the initial environment seed data. Then you could use the same dumb heuristic I described, or more elaborate ones (like complex decision trees, etc). Or in their case, LLM based decision making.
Basically, simulation tools make it easier to write whatever scenario you have in mind, and code up how agents and the world work. Like a framework that handles the redundant and technical details of making a simulation... Then it's up to you to decide what your simulation will look like, so really it's an open-ended tool to try and make informed guesses. The better the modelling of the real world problem is, presumably the better the fidelity of the simulation.
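To make the "LLM instead of heuristic" idea concrete, here's a minimal sketch of swapping a hard-coded rule for a model-driven decision at a single decision point. `ask_llm` is a deterministic stub standing in for a real model call; none of these function names come from Simulatrex's actual API.

```python
def heuristic_decide(agent_state, environment):
    # Fixed rule: buy whenever the price is below the agent's budget.
    return "buy" if environment["price"] < agent_state["budget"] else "wait"

def ask_llm(prompt):
    # Stub: a real implementation would call a language model here and
    # parse its reply. We fake a deterministic answer for illustration.
    return "buy" if "price: 5" in prompt else "wait"

def llm_decide(agent_state, environment):
    # Same decision point, but the choice is delegated to a model given a
    # natural-language description of the agent and its environment.
    prompt = (
        f"You are a customer with budget {agent_state['budget']}. "
        f"The current price: {environment['price']}. Reply 'buy' or 'wait'."
    )
    return ask_llm(prompt)

state = {"budget": 10}
print(heuristic_decide(state, {"price": 5}))   # heuristic path
print(llm_decide(state, {"price": 5}))         # LLM-backed path
```

The framework's job is everything around these two functions (scheduling, environment updates, logging); which decision function you plug in is up to you.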
> The better the modelling of the real world problem is, presumably the better the fidelity of the simulation.
But that's exactly my concern: LLMs are known to hallucinate. How can we make sure the simulated agents are acting like actual humans? It can't be just a "cool" tool you add to your company arsenal; it must actually provide some value.
Well, idk. But like any approximation I guess you can check if it fits prior controls better than your existing models. It's not like existing stuff is necessarily better.
I have no horse in this. You just asked for explanations of what their project is about. It seemed obvious to me so I took a stab at providing context.
The hallucination may not be a bug but a form of regularisation that helps sample behaviour, and that itself might be of great use, as it might act as a proxy for focus groups.
> Then they mention examples (like how simulated customers would react to price changes), but they don't provide any evidence that such results could be trusted or relied upon, esp. given that LLMs are known to hallucinate.
Flagged.