"REST" is our industry's most successful collective delusion: everyone knows it's wrong, everyone knows we're using it wrong, and somehow that works better than being right.
Exactly. In UDA, each Movie entity belongs to a specific business domain. Universality isn't an inherent truth; it's a social alignment within a group, useful only to the extent that it helps solve shared problems.
The concept is that you model the core of your application and build it at the same time, using declarative tools, and project additional layers from that definition. The underlying data model is extendable via, well, extensions. These extend the DSL schema.
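To make that concrete, here's a minimal sketch of what a declaratively modeled resource looks like in Ash (the `MyApp.*` names and the attributes are made up for illustration); the data layer and extensions are where the storage mapping and projected layers like GraphQL hook in:

```elixir
defmodule MyApp.Movie do
  # The declarative model *is* the application: the data layer and
  # extensions project storage and API layers from one definition.
  use Ash.Resource,
    domain: MyApp.Catalog,
    data_layer: AshPostgres.DataLayer,
    extensions: [AshGraphql.Resource]

  postgres do
    table "movies"
    repo MyApp.Repo
  end

  attributes do
    uuid_primary_key :id
    attribute :title, :string, allow_nil?: false
    attribute :release_year, :integer
  end

  actions do
    defaults [:read, :destroy, create: :*, update: :*]
  end

  graphql do
    # Projected GraphQL type, derived from the same schema.
    type :movie
  end
end
```

Extensions like `AshGraphql` contribute new sections (here, the `graphql` block) to the DSL schema, which is what makes the model extendable without touching the core.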
It's not conceptually a knowledge graph in the same way, but you can introspect essentially everything about your application. Resources can also be given data layers, which define how they map to underlying storage, and you can use all of this purely as static information to derive additional things from, or you can just...well, use it, e.g. `Ash.read(Resource)` yielding the table data. Our query engine has the same semantics they describe, where you don't explicitly join, etc.
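For example (a sketch against the hypothetical `MyApp.Movie` above, with an assumed `:reviews` relationship), the same definition serves as both static metadata and a runnable query:

```elixir
# Use the model as static information...
Ash.Resource.Info.attributes(MyApp.Movie)
# => [%Ash.Resource.Attribute{name: :id, ...}, %Ash.Resource.Attribute{name: :title, ...}]

Ash.Resource.Info.data_layer(MyApp.Movie)
# => AshPostgres.DataLayer

# ...or just use it: run the resource directly against its data layer.
movies = Ash.read!(MyApp.Movie)

# Related data is loaded by name; the engine resolves the joins.
MyApp.Movie
|> Ash.Query.load(:reviews)
|> Ash.read!()
```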
You can generate charts and graphs, including things like policy flow charts.
---
Ultimately I've found that modeling tools like UML that can't also execute the model (i.e. act as the application itself) are always insufficient and/or have massive impedance mismatches once the rubber meets the road. The point is to effectively reimagine this as "what if we use these modeling principles, declaratively, from the ground up".
Factor in that building essentially any server-side tooling without Elixir (BEAM) is a bad idea in my view, and you end up with "let's just make this the way we build apps, and do it in Elixir". It's been very powerful and we're continuing to progress on it.
It is important in UDA for the data models to be part of the same knowledge graph as the data container representations and the mappings, and eventually the instance data too. Our metamodel, Upper, is strongly inspired by RDFS, SHACL, and OWL in that respect.
UDA does not believe in the existence of universal data entities. We embrace the idea that two or more teams may have different opinions on how to represent the world. We are focused on the discovery of existing entities across systems and on their reuse through extensibility. We believe that automating the projections will be key for teams to align on defining some entities, where it makes sense.
> The whole point of GraphQL is to create a unified view of something, not to have 23 different versions of "Movie".
GraphQL is great at federating APIs and is a standardized API protocol. It is not a data modeling language. We actually tried really hard with GraphQL first.
Great question. It really depends on the projection. For example, the projections to GraphQL and Java are mostly limited to what can be expressed there. But the projection to SHACL has access to the full expressiveness of SPARQL-based constraints, which is what's used for the bootstrapping knowledge graph. We are looking into being able to do more runtime validation for data in the warehouse.
got it, thanks. makes sense that it depends on the projection target. SHACL+SPARQL seems like the strongest runtime check layer then. for projections like graphql or java where enforcement is weaker, is there any way to inject runtime guards or contract tests as part of the generated code? or is the idea to keep enforcement external and just let uda define the schema canonically?
We have a few runtime checks in the Java projections, but they're still tied to SHACL/SPARQL through the Jena library. We are exploring ways to keep using SHACL at scale for advanced data profiling, and also trying to identify a subset of SHACL constraints that could compile down to SQL constraints for validating directly against the data warehouse.
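As a purely hypothetical sketch of that last idea (not UDA's implementation; all names invented), a small subset of SHACL property-shape constraints maps fairly directly onto SQL check constraints:

```elixir
defmodule ShaclToSql do
  # Hypothetical sketch: compile a tiny subset of SHACL property
  # shapes into SQL CHECK constraints for a warehouse table.

  # sh:minCount 1  ->  the column must be present
  def compile(%{path: col, min_count: 1}), do: "#{col} IS NOT NULL"

  # sh:maxLength n  ->  bounded string length
  def compile(%{path: col, max_length: n}), do: "LENGTH(#{col}) <= #{n}"

  # sh:minInclusive / sh:maxInclusive  ->  numeric range
  def compile(%{path: col, min_inclusive: lo, max_inclusive: hi}),
    do: "#{col} BETWEEN #{lo} AND #{hi}"

  def to_check_constraints(table, shapes) do
    Enum.map(shapes, &"ALTER TABLE #{table} ADD CHECK (#{compile(&1)});")
  end
end

ShaclToSql.to_check_constraints("movies", [
  %{path: "title", min_count: 1},
  %{path: "title", max_length: 300},
  %{path: "release_year", min_inclusive: 1888, max_inclusive: 2100}
])
# => ["ALTER TABLE movies ADD CHECK (title IS NOT NULL);", ...]
```

Anything outside such a subset (notably SPARQL-based constraints) would presumably still need a SHACL engine at profiling time rather than running in the database.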
> I wonder how they deal with versioning or breaking changes to the model.
Versioning is permission to break things.
Although it is not implemented in UDA yet, the plan is to embrace the same model as Federated GraphQL, which has proved to work very well for us (think 500+ federated GraphQL schemas). In a nutshell, UDA will actively manage deprecation cycles, as we have the ability to track the consumers of the projected models.
That is a lot of subgraphs. Am I understanding correctly then that under UDA developers fulfill the UDA spec in whatever language they’re using, and then there’s some kind of middleware that will handle serving GraphQL queries? How are mutations represented? And how are other GraphQL-specific idioms expressed (like input parameters, nodes/edges/connections/etc.)? Is it just a subset of GraphQL that is supported?
I manage a much smaller federation where I work, and I think we have a lot of the same ideals in terms of having centralized types that the rest of the business recognizes across the board. Right now we accomplish that with a set of "core" subgraphs that define these types, while our more product-focused subgraphs implement their own types, queries, and mutations and can extend the core ones where it makes sense.
The proof is in the 500+ domain graph services federated into our GraphQL enterprise gateway, which will all be exposed to Sphere through UDA. That's real.