Hi all, Rafael here

Hi all,

I have been a software developer for more than two decades now, and since very early in my career I have been interested in raising the level of abstraction of programming languages, more specifically in the context of business applications.

I develop another open-source project called Cloudfier that shares many of the same traits found in GenerativeObjects; however, my audience is regular software developers, not the “citizen developer” that most low-code approaches tend to target.

But why am I here? Well, I think there are many components that a model-driven tool like Cloudfier shares with low-code tools such as GenerativeObjects (and also with tools for domain-specific-language-based development), so I will be looking for synergies and opportunities to share technology between GO and my own projects.

One obstacle may be technology stack choices: GO is .NET/CLR and Cloudfier is Java/JVM.

One thing that should help here is a design based on APIs and distributed services. I am not sure whether the existing GO already does this, or whether it is planned for (or of interest to) the new GO.

Another strategy is to define conceptual APIs that are implementation independent (having a reference implementation in stack X, Y, or Z is fine and actually important, as long as the RI stack’s idiosyncrasies don’t creep into the conceptual APIs) and that can have bindings on multiple technologies. This way, at least we get to share the same conceptual framework, and interoperability becomes easier.
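To make the idea concrete, here is a minimal sketch of what such a stack-neutral conceptual API could look like, with one possible binding. All names (`ModelStore`, `ModelElement`, `InMemoryModelStore`) are hypothetical illustrations, not actual GO or Cloudfier types; the point is that the contract uses only portable shapes (strings, maps, lists) so it could equally be bound to C#, Java, or JavaScript:

```typescript
// Illustrative, stack-neutral "conceptual API" for model access.
// Only portable data shapes appear in the contract: no CLR- or
// JVM-specific types that would tie clients to one stack.

interface ModelElement {
  id: string;
  type: string; // e.g. "Class", "Attribute"
  properties: Record<string, unknown>;
}

// The conceptual contract, independent of any implementation.
interface ModelStore {
  listElements(type: string): Promise<ModelElement[]>;
  getElement(id: string): Promise<ModelElement | undefined>;
}

// One possible binding: a trivial in-memory reference implementation.
// A JVM or CLR binding would implement the same conceptual contract.
class InMemoryModelStore implements ModelStore {
  constructor(private elements: ModelElement[]) {}

  async listElements(type: string): Promise<ModelElement[]> {
    return this.elements.filter((e) => e.type === type);
  }

  async getElement(id: string): Promise<ModelElement | undefined> {
    return this.elements.find((e) => e.id === id);
  }
}
```

The same interface definitions could be expressed once (say, as an OpenAPI or IDL document) and generated into each stack, which keeps the reference implementation’s idiosyncrasies out of the shared contract.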

Cheers,

Rafael
https://blog.abstratt.com/rafael-chaves/


Thank you @rafael for introducing yourself and for being here with us!

This feels great! Yes, I believe that co-creating is the way to go, and co-creating not only on a single project but across multiple projects is even more powerful. It reinforces the need to design solutions in a modular and interoperable way. So a big YES to this!

Generative Objects is fully built on JSON/REST APIs, so it is pretty straightforward to communicate with Generative Objects services from any technology.
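This portability can be illustrated with a tiny client sketch. The base URL and endpoint paths below are made up for illustration (they are not actual GO endpoints); the point is that a JSON/REST contract reduces to plain request descriptions that any stack's HTTP library (fetch, .NET's HttpClient, java.net.http) can execute:

```typescript
// Hypothetical sketch of a stack-agnostic JSON/REST client.
// Endpoint paths and the base URL are invented for illustration.

interface RestRequest {
  method: "GET" | "POST";
  url: string;
  body?: unknown; // serialized as JSON by whatever HTTP library is used
}

class GoServiceClient {
  constructor(private baseUrl: string) {}

  // Builds a request description rather than sending it, so the same
  // logic could be ported verbatim to any language with an HTTP stack.
  getModel(modelId: string): RestRequest {
    return {
      method: "GET",
      url: `${this.baseUrl}/models/${encodeURIComponent(modelId)}`,
    };
  }

  createModel(name: string): RestRequest {
    return {
      method: "POST",
      url: `${this.baseUrl}/models`,
      body: { name },
    };
  }
}
```

Executing such a request is then a one-liner in each stack, which is exactly what makes JSON/REST a good lowest common denominator between a CLR-based tool and a JVM-based one.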

On the technology stack: it is .NET, but the project is open to being migrated to another, more open technology. It could be Java or Node.js, or something else; we would need to figure this out. By the way, I am getting ahead of myself here, but I can foresee an evolution of Generative Objects towards being a decentralized platform, supporting the modelling and generation of decentralized applications. By decentralized, I mean applications that no longer rely on a server implementation (or rely on it much less) and mainly run on the client side, with distributed data storage as well. So Node.js could be a perfect technology choice for building fully decentralized or mixed solutions. More on this later :slight_smile:

Nevertheless, the need to interoperate with technologies other than the one used to natively build GO is still present.

Not sure I got this one. Isn’t it enough to have a full JSON/REST API to be interoperable?

For the rest, I guess whenever there is a common interest in some topic for both Cloudfier and Generative Objects, we will have the opportunity to brainstorm and co-create!

Rewriting may be too hard. An alternative, more incremental approach would be to carve out as much value from the existing code as possible, perhaps by building remote-friendly interfaces around subsystems (so their features become available to external clients), and later breaking those subsystems into their own independently deployable modules. Rewriting this or that module on occasion would then be a more feasible task, and some may never need rewriting at all.

What are the driving forces here, Walter? Working offline? As long as it is still compatible with the other end of the spectrum (a thin-client model, where all interesting computation is performed elsewhere by internet-scale services and only a web browser is required), it sounds like a good thing.

If I understand you correctly, you are considering NodeJS/Deno as the technology to glue together the different modules/services implemented in varied stacks, and maybe to provide a facade API for accessing them. Something like that?
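The "glue" idea being discussed can be sketched as a small routing facade: a registry maps public API prefixes to the internal services (each potentially in a different stack) that implement them. Service names and URLs below are invented purely for illustration:

```typescript
// Hypothetical facade routing: public paths are resolved to the backend
// service that owns them, regardless of that service's implementation stack.

// Invented registry: e.g. a JVM-based modeling service and a
// .NET-based code generation service behind one facade.
const serviceRegistry: Record<string, string> = {
  modeling: "http://modeler.internal:8080",
  generation: "http://codegen.internal:5000",
};

// Resolve a facade path like "/modeling/models/42" to the backend URL
// it should be proxied to; returns undefined for unknown prefixes.
function resolveBackend(path: string): string | undefined {
  const [, prefix, ...rest] = path.split("/");
  const base = serviceRegistry[prefix];
  return base ? `${base}/${rest.join("/")}` : undefined;
}
```

A real facade (in Node, Deno, or anything else) would wrap this resolution step with actual proxying, authentication, and error handling; the sketch only shows why a thin glue layer keeps clients unaware of which stack serves each feature.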

It should be - I mentioned that as an alternative in case the approach of using technology-independent service interfaces were not acceptable/interesting/compatible with GO’s design.

Thanks!

Rafael


Yes indeed, there can be an iterative strategy for migrating. To be honest, I don’t know if it is required. It all depends on the community and the discussions we can have around this. Moving to .NET Core will already allow deploying on Linux, so there would be no infrastructure constraint tying us to Windows. We could also migrate only the code generation templates for the target applications’ code, and not necessarily all the components.

Yes, it is all good this way. However, there is this idea of a new paradigm shift, discussed and tackled by other technologies: moving to an (almost) serverless internet. Check this: https://ruben.verborgh.org/blog/2017/12/20/paradigm-shifts-for-the-decentralized-web/ or this: https://inrupt.com/vision . I have already integrated a connector to SOLID. But we can discuss this topic in another post in the Vision category.

I see, I remember now from discussions on the Strumenta community. Is that something you want to pursue as the deployment model, or as one deployment model among others, with cloud-based deployments with thin clients still intended to be supported?

Cheers,

Rafael

This is the beauty of code generation: both models can exist side by side, so both will be supported. Supporting this with model interpretation would be a huge amount of work, especially for such a major change as moving from a client-server architecture to a decentralized one.

The meta-model should stay pretty much the same, with maybe a few exceptions, as the current meta-model already supports multiple data sources and is therefore ready to support potentially as many data sources as there are users of the application. What will change is the target generated architecture, and therefore the code generation templates. The server-side design is already done in a way that large parts of it could be replicated and adapted client side, moving from C# to JavaScript, with some adjustments.

We are starting a real brainstorming session here; I will summarize all this in an appropriate post!

Thank you @rafael for triggering this discussion :wink:

Hi Walter - just to make sure I am 100% clear: I was talking about the GO tooling itself, as opposed to applications built using GO. Will the design for the GO tooling, as you imagine it, be compatible with zero-install, cloud-centric use cases? I got the impression the SOLID stuff implies doing more things locally, but I may just be misunderstanding it. EDIT: just read the article on SOLID you linked to. So my question now is: do you plan to go all in on adopting the SOLID approach, or are more conventional architectures for deploying the GO tooling meant to be supported as well?


Well, both cases will be possible once the two architectures/deployment models exist. We would then leave it to the user to decide which version to use!

Yes indeed, much more would be done locally. For instance, though, it probably makes sense to keep the code generation part as a server-side service, not a local one! We would then have a mixed architecture. It is pretty early stage, though; I anticipate many discussions around this topic, and around the motivations for going with one model or the other.


Hi @rafael, Welcome, and good to see your inputs here.
