Shorthand for built-in operations #109

Closed
Tracked by #130
tpluscode opened this issue Nov 21, 2018 · 4 comments

tpluscode commented Nov 21, 2018

The current way of importing code from barnard59-base, barnard59-formats and barnard59-protocols seems a little verbose and repetitive to me. What if, instead of typing out the code:link/code:type pair, one could just use a set of well-known identifiers?

For example

@prefix b59: <http://example.com/barnard59/base/> .

<step> p:operation b59:map .

could be interpreted as the more verbose

<step> p:operation [
  code:link <node:barnard59-base#map>;
  code:type code:ecmaScript
] .

bergos commented Nov 22, 2018

How would you determine the datatype? Would b59:map contain the link and type? It's not yet implemented, but maybe we allow reading template literals from the file system. It should be possible to handle that case.

tpluscode commented Nov 22, 2018

How would you determine the datatype?

Do you mean the code:type? For the defaults it will always be ecmaScript, right?

Would b59:map contain the link and type?

Yes, I was thinking that the easiest way, requiring no changes to the pipeline code, would be to merge the pipeline graph with a graph containing all those built-in operations. This way it would also be trivially possible to keep the shared operations in a separate RDF file and add a way for consumers to load additional graphs into the definition.
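As a rough sketch (assuming the code: prefix points at https://code.described.at/ and reusing the example b59: namespace from above; the operation names are only illustrative), such a shared graph could look like:

@prefix b59: <http://example.com/barnard59/base/> .
@prefix code: <https://code.described.at/> .

# one entry per built-in operation
b59:map
  code:link <node:barnard59-base#map> ;
  code:type code:ecmaScript .

b59:filter
  code:link <node:barnard59-base#filter> ;
  code:type code:ecmaScript .

With that graph merged in, <step> p:operation b59:map from the first example resolves to the full operation description.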

The downside is having to parse a bunch of additional triples, even if most of them are not used in a given pipeline.

Otherwise I was talking about something more sophisticated, where we destructure the URI to figure out which built-in operation is referenced and insert those triples into the definition graph. Something like extracting variables from a URI template /barnard59/{package}/{func}. So the appearance of the aforementioned <http://example.com/barnard59/base/map> would result in the equivalent of

INSERT DATA
{
  <http://example.com/barnard59/base/map>
    code:link <node:barnard59-base#map> ;
    code:type code:ecmaScript .
}

This method also does not affect how the definition is then processed, but it requires additional code to parse the URIs (URI Template or regex?) and seems somewhat more magical and less extensible.
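To make that a bit more concrete, the generic expansion could even be sketched as a single SPARQL update rather than URI-parsing code (the p: and code: prefix IRIs below are assumptions, and the example namespace is the one used above):

PREFIX p: <https://pipeline.described.at/>
PREFIX code: <https://code.described.at/>

INSERT {
  ?op code:link ?link ;
      code:type code:ecmaScript .
}
WHERE {
  ?step p:operation ?op .
  FILTER STRSTARTS(STR(?op), "http://example.com/barnard59/")
  # destructure .../barnard59/{package}/{func}
  BIND (STRAFTER(STR(?op), "http://example.com/barnard59/") AS ?rest)
  BIND (STRBEFORE(?rest, "/") AS ?package)
  BIND (STRAFTER(?rest, "/") AS ?func)
  BIND (IRI(CONCAT("node:barnard59-", ?package, "#", ?func)) AS ?link)
}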

maybe we allow reading template literals from the file system

I do not understand that

bergos commented Nov 23, 2018

I think it should be possible to have some kind of smart graph loader or triple generator. Where do you think that feature should be placed? I was thinking of putting it between the CLI tool and the pipeline code: the CLI tool uses a function like loadExpandedPipelineGraph, which contains the logic. The logic should be reusable, so we can also use a triplestore as a source for the additional triples if we don't create them on the fly. But I would separate it from the core, to keep the complexity in that package low. That also allows us to work on it later without breaking changes in the core.

tpluscode transferred this issue from zazuko/barnard59-core on Jun 20, 2023
tpluscode commented

Closing in favor of #131
