Wednesday, February 20, 2019

Yet a further reason for semantically querying with Prolog.

In my work over the past year, it has become increasingly clear to me that not only are my queries reused, but portions of the queries are reused as well.  I haven't found any way to call a named subquery from within SPARQL, but in Prolog it is not only possible, it is almost expected.

But to be more particular, what I have been needing are whole fragments of consecutive basic graph patterns (BGPs).  So if a query's select statement is made up of 30 BGPs, I might need to reuse a consecutive group of 10 of them.
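In AllegroGraph's Lisp-style Prolog, for instance, a reusable fragment is simply a named rule.  A sketch of what I mean (the predicate and property names here are hypothetical, not from any real ontology):

```
;; A reusable fragment: several consecutive BGPs bundled under one name.
(<- (person-birth-event ?PERSON ?EVENT ?DATE)
  (q- ?PERSON !rdf:type !st:Person)
  (q- ?PERSON !st:hasBirthEvent ?EVENT)
  (q- ?EVENT !st:hasDate ?DATE))

;; Reused in one query ...
(select (?PERSON ?DATE)
  (person-birth-event ?PERSON ?EVENT ?DATE))

;; ... and again in another, combined with additional patterns.
(select (?PERSON ?PLACE)
  (person-birth-event ?PERSON ?EVENT ?DATE)
  (q- ?EVENT !st:occurredAt ?PLACE))
```

The named rule is the abstraction barrier: callers see one goal where the store sees several BGPs.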

SPARQL is a great start, historically, but what I'm looking for is a way to create abstraction barriers - just like when I do procedural coding.

Thursday, August 10, 2017

Yet another reason for querying RDF with Prolog.

In a previous work position, I enjoyed querying entirely in Allegro Prolog rather than SPARQL.  At some point, I found that I needed the performance of the better query optimizer associated with their implementation of SPARQL, so I wrote my own layer over their SPARQL that made it look much like their Prolog.  In other posts I've talked about some of the advantages of a semantic Prolog for RDF queries.  In my current work, however, I've needed to use much more SPARQL for various reasons, and in that process I've found an additional reason to value Prolog as a semantic query mechanism.

I have several issues with SPARQL, but perhaps the biggest is that it takes graph data and returns flat tuples - essentially spreadsheet data.  SPARQL's CONSTRUCT form does return graph data, but all of my present semantic query needs revolve around pulling semantic data into my application environments.  Prolog, on the other hand, can return nested data - lists within lists within lists - using bagof and setof.
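As a sketch of the nesting I mean, in AllegroGraph's Lisp-style Prolog (the property names are illustrative, and the exact bagof argument order should be checked against the vendor's documentation):

```
;; For each person, collect children, and for each child collect
;; grandchildren - so each level of the graph becomes a level of
;; list nesting in the result, rather than a flattened tuple.
(select (?PERSON ?FAMILY)
  (q- ?PERSON !rdf:type !st:Person)
  (bagof (?CHILD ?GRANDCHILDREN)
         (and (q- ?PERSON !st:hasChild ?CHILD)
              (bagof ?GC (q- ?CHILD !st:hasChild ?GC) ?GRANDCHILDREN))
         ?FAMILY))
```

A SPARQL SELECT over the same data would return one flat row per person/child/grandchild combination, leaving the regrouping to the application.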

Tuesday, January 28, 2014

On automatically inducing ontologies


It would be a lot more feasible to induce ontologies if we agreed on what it meant to have a good ontology.  I claim that crafting a good ontology is an AI-hard - in other words, a human-hard - task.  I've been a little surprised in the last few years by the number of requests for proposals for the automatic induction of ontologies.  I would much rather have seen that money spent on deciding what it means to do good data model design.  Once we know what a good ontology is, we can properly judge attempts to automatically induce such things.  But it's clear to me that the automatic induction of ontologies is also NLP-hard :^).  And Google Translate isn't going to get us to full natural language understanding or anything close to it.  I claim this will require handcrafted ontologies.  So the bottom line to me is that once we've completely modeled our practical world, we will be prepared to automatically induce that model :^).

Thursday, December 5, 2013

The Magical Nature of AllegroGraph Prolog

While SPARQL is becoming more magical, let's take time to ask, "Is Franz' AGProlog already magical?"  I mean magical in the sense of magic properties in SPARQL (http://www.w3.org/wiki/SPARQL/Extensions/Computed_Properties).

Franz, I believe, is certainly doing the right thing by directly supporting magical properties in AllegroGraph for the SPARQL-centric subset of their customer base.  This sort of magic corrects some of what I don't care for in SPARQL's FILTER syntax.  But let's not forget that Franz' Prolog has been magic-capable from the day they released their "q" and "q-" Prolog functors as part of their semantic support for Prolog.  I've been using their Prolog magically for a couple of years by creating Prolog adapter rules like the following:

(<- (q- ?POINT_1 !st:myNonProprietaryPointAfter ?POINT_2) ;;rule conclusion.
  (point-after ?POINT_1 ?POINT_2)) ;;rule premise; "point-after" is a Franz-supplied Prolog temporal functor.

This rule is invoked every time that basic semantic store access fails to find a matching triple/statement that declaratively asserts the "beforeness".  So in magical fashion, I first try to grab it from the store, then I try to compute it.  A query against such an adapted property (here another of my own Allen properties, endsBeforeStartingOf) looks like this:

(select (?POINT_1 ?POINT_2)
  (q- ?POINT_1 !st:endsBeforeStartingOf ?POINT_2))

It is a bit of extra work for me to write these adapter rules, but it allows me to name my magic properties as I choose.  In particular, I have my own Allen interval overlap properties that I am fond of and that happen to be vendor neutral.  Yet I tie into their Prolog's magic-enabling functors, as in the point-after example above.
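An adapter for one of those Allen overlap properties follows the same pattern.  Here the property name is my own, and interval-overlaps stands in for whichever vendor-supplied interval functor actually applies (the functor name is assumed, not quoted from their documentation):

```
;; Hypothetical adapter: a vendor-neutral Allen "overlaps" property,
;; backed by a vendor-supplied interval functor (name assumed here).
(<- (q- ?INTERVAL_1 !st:myAllenOverlaps ?INTERVAL_2) ;;rule conclusion.
  (interval-overlaps ?INTERVAL_1 ?INTERVAL_2))       ;;rule premise.
```

Swapping vendors would then mean rewriting only the one-line premise, never the queries that use !st:myAllenOverlaps.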

They could provide their own proprietary adapters for those who might like out-of-the-box magic simplicity, but I'm quite happy with the vendor-neutrality of the current state.

It appears that their magic SPARQL properties have corresponding Prolog functors.  So they have a good claim on temporal, geo-spatial, and social network analysis magic for both SPARQL and Prolog.  My experience is only with the Prolog magic-enabling functors.

Bottom line:  AllegroGraph Prolog is quite magical for my purposes.  Perhaps it was the first magic-enabling implementation.

Saturday, November 9, 2013

Congrats to Franz again on GRUFF!  If I were more emotional I would weep with joy over the perfection of the reification display!  

I haven't seen another graph display that gets statement reification right, so I consider this a big moment in the semantic community.

I've met lots of reification haters out there, and this will go a long way toward healing their notions.  Using Prolog instead of SPARQL will, of course, do much.  Then they just need to understand that it doesn't have to be slow.  At my last position, with our home-brewed architecture, we were running queries of 70 BGPs, and each one of the BGPs was reified with provenance.  Our customer required that we query using the provenance on every statement/BGP.  We had compact quads, yet we loaded bloated non-compact reification in the name of W3C compliance - which seems silly when we could have used statement-ID-based nquads like I'm trying to do now.

Franz is now working on loading compact quints (<s> <p> <o> <g> <tid>) from files, but it is presently very doable using procedural code.
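A rough sketch of what I mean by procedural loading, in the Lisp client (parse-quint-line is a hypothetical helper, and the assumption that add-triple returns the new statement's ID should be verified against the current client API):

```
;; Rough sketch: load <s> <p> <o> <g> <tid> lines procedurally.
(defun load-quints (file)
  (let ((tid->store-id (make-hash-table :test #'equal)))
    (with-open-file (in file)
      (loop for line = (read-line in nil)
            while line
            do (destructuring-bind (s p o g tid) (parse-quint-line line)
                 ;; Remember the file's statement ID against the store's,
                 ;; so later statements about this statement can be resolved.
                 (setf (gethash tid tid->store-id)
                       (add-triple s p o :g g)))))
    tid->store-id))
```

The point is only that nothing blocks quint loading today; it just isn't yet a one-call bulk load.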


Superbness!  Great reification display!

Please see my attachment.  It is meant to encode "This morning, Jim planned on running to Costco before 6:00."



Monday, November 4, 2013


The Semanic Effort



Basic Information:

The Semanic effort is an attempt to create a small experimental semantic web featuring formal semantic modeling and integration of genealogical data. It employs some ambitious/radical temporal reasoning techniques and high expressivity. In particular, it boldly reifies all times as a shared library of time interval instances to boost the expressivity of temporal reasoning. The name is meant to be a sticky/memorable pun that suggests a somewhat manic ambition in combining many difficult-to-reconcile design goals and the hated single ontology.
The most recent ontology is to be found at http://semanic.org/OntDef/08-26-12/SemanticGEDCOM.owl.
The time instance data is found at http://semanic.org/RefDat/09-26-12/Time/TimeWeb.owl. This is a large file; you may want to load the individual 100-year files instead.
To submit a SPARQL or Prolog query against the genealogical data, go to this link.
  • Peterson, E., Reified Literals: A Best Practice Candidate Design Pattern for Increased Expressivity in the Intelligence Community, Semantic Technology for Intelligence, Defense, and Security, Fairfax, VA, 2010