work in progress [might break]

This commit is contained in:
Leon van Kammen 2023-09-05 19:14:10 +02:00
parent 47e9db2b52
commit ba8f3155bb
4 changed files with 893 additions and 559 deletions

View file

@ -82,24 +82,43 @@ value: draft-XRFRAGMENTS-leonvankammen-00
<p>This draft offers a specification for 4D URLs &amp; navigation, to link 3D scenes and text together with- or without a network-connection.<br>
The specification promotes spatial addressability, sharing, navigation, querying and tagging interactive (text)objects across (XR) Browsers.<br>
XR Fragments allows us to enrich existing dataformats, by recursive use of existing proven technologies like <a href="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</a> and <a href="https://visual-meta.info">visual-meta</a>.<br></p>
XR Fragments allows us to enrich existing dataformats, by recursive use of existing proven technologies like <a href="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</a> and BibTeX notation.<br></p>
<blockquote>
<p>Almost every idea in this document is demonstrated at <a href="https://xrfragment.org">https://xrfragment.org</a></p>
</blockquote>
<section data-matter="main">
<h1 id="introduction">Introduction</h1>
<p>How can we add more features to existing text &amp; 3D scenes, without introducing new dataformats?<br>
Historically, there have been many attempts to create the ultimate markuplanguage or 3D fileformat.<br>
However, thru the lens of authoring their lowest common denominator is still: plain text.<br>
XR Fragments allows us to enrich existing dataformats, by recursive use of existing technologies:<br></p>
However, thru the lens of authoring, their lowest common denominator is still: plain text.<br>
XR Fragments allows us to enrich/connect existing dataformats, by recursive use of existing technologies:<br></p>
<ol>
<li>addressability and navigation of 3D scenes/objects: <a href="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</a> + src/href spatial metadata</li>
<li>hasslefree tagging across text and spatial objects using BiBTeX (<a href="https://visual-meta.info">visual-meta</a> e.g.)</li>
<li>hasslefree tagging across text and spatial objects using <a href="https://en.wikipedia.org/wiki/BibTeX">BibTeX</a> &lsquo;tags&rsquo; as appendix (see <a href="https://visual-meta.info">visual-meta</a> e.g.)</li>
</ol>
<blockquote>
<p>NOTE: The chapters in this document are ordered from highlevel to lowlevel (technical) as much as possible</p>
</blockquote>
<h1 id="core-principle">Core principle</h1>
<p>XR Fragments strives to serve (nontechnical/fuzzy) humans first, and machine(implementations) later, by ensuring hasslefree text-vs-thought feedback loops.<br>
This also means that the repair-ability of machine-matters should be human friendly too (not too complex).<br></p>
<blockquote>
<p>&ldquo;When a car breaks down, the ones <strong>without</strong> turbosupercharger are easier to fix&rdquo;</p>
</blockquote>
<p>Let&rsquo;s always focus on average humans: the &lsquo;fuzzy symbolical mind&rsquo; must be served first, before serving the greater <a href="https://en.wikipedia.org/wiki/Borg">&lsquo;categorized typesafe RDF hive mind&rsquo;</a>.</p>
<blockquote>
<p>Humans first, machines (AI) later.</p>
</blockquote>
<h1 id="conventions-and-definitions">Conventions and Definitions</h1>
<table>
@ -133,7 +152,7 @@ XR Fragments allows us to enrich existing dataformats, by recursive use of exist
<tr>
<td>XR fragment</td>
<td>URI Fragment with spatial hints (<code>#pos=0,0,0&amp;t=1,100</code> e.g.)</td>
<td>URI Fragment with spatial hints like <code>#pos=0,0,0&amp;t=1,100</code> e.g.</td>
</tr>
<tr>
@ -148,17 +167,17 @@ XR Fragments allows us to enrich existing dataformats, by recursive use of exist
<tr>
<td>query</td>
<td>an URI Fragment-operator which queries object(s) from a scene (<code>#q=cube</code>)</td>
<td>an URI Fragment-operator which queries object(s) from a scene like <code>#q=cube</code></td>
</tr>
<tr>
<td>visual-meta</td>
<td><a href="https://visual.meta.info">visual-meta</a> data appended to text which is indirectly visible/editable in XR.</td>
<td><a href="https://visual.meta.info">visual-meta</a> data appended to text/books/papers which is indirectly visible/editable in XR.</td>
</tr>
<tr>
<td>requestless metadata</td>
<td>opposite of networked metadata (RDF/HTML request-fanouts easily cause framerate-dropping, hence not used a lot in games).</td>
<td>opposite of networked metadata (RDF/HTML requests can easily fan out into framerate-dropping, hence not used a lot in games).</td>
</tr>
<tr>
@ -183,15 +202,6 @@ XR Fragments allows us to enrich existing dataformats, by recursive use of exist
</tbody>
</table>
<h1 id="core-principle">Core principle</h1>
<p>XR Fragments strives to serve humans first, machine(implementations) later, by ensuring hasslefree text-to-thought feedback loops.<br>
This also means that the repair-ability of machine-matters should be human friendly too (not too complex).<br></p>
<blockquote>
<p>&ldquo;When a car breaks down, the ones without turbosupercharger are easier to fix&rdquo;</p>
</blockquote>
<h1 id="list-of-uri-fragments">List of URI Fragments</h1>
<table>
@ -307,11 +317,11 @@ This also means that the repair-ability of machine-matters should be human frien
<p>An XR Fragment-compatible browser viewing this scene, allows the end-user to interact with the <code>buttonA</code> and <code>buttonB</code>.<br>
In case of <code>buttonA</code> the end-user will be teleported to another location and time in the <strong>current loaded scene</strong>, but <code>buttonB</code> will
<strong>replace the current scene</strong> with a new one (<code>other.fbx</code>).</p>
<strong>replace the current scene</strong> with a new one, like <code>other.fbx</code>.</p>
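<p>A minimal sketch of how an XR Fragment-compatible browser might route such <code>href</code> values (illustrative only: <code>teleportTo</code>, <code>playAnimation</code> and <code>loadScene</code> are assumed viewer helpers, not defined by this spec):</p>
<pre><code>// naive href router (sketch): same-document fragments teleport, other URLs replace the scene
function followHref( href ){
  if( href[0] == '#' ){                                // buttonA: '#pos=0,0,2&amp;t=0,100'
    let params = {}
    href.substr(1).split('&amp;').map( function(kv){
      let pair = kv.split('=')
      params[ pair[0] ] = pair[1].split(',').map(Number)  // '0,0,2' becomes [0,0,2]
    })
    if( params.pos ) teleportTo( params.pos )          // move camera within the current scene
    if( params.t   ) playAnimation( params.t )         // jump to the given animation range
  }else loadScene( href )                              // buttonB: 'other.fbx' replaces the scene
}
</code></pre>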
<h1 id="embedding-3d-content">Embedding 3D content</h1>
<p>Here&rsquo;s an ascii representation of a 3D scene-graph with 3D objects (<code>◻</code>) which embeds remote &amp; local 3D objects (<code>◻</code>) (without) using queries:</p>
<p>Here&rsquo;s an ascii representation of a 3D scene-graph with 3D objects <code>◻</code> which embeds remote &amp; local 3D objects <code>◻</code> with (or without) queries:</p>
<pre><code> +--------------------------------------------------------+ +-------------------------+
| | | |
@ -334,55 +344,90 @@ In case of <code>buttonA</code> the end-user will be teleported to another locat
<p>An XR Fragment-compatible browser viewing this scene, lazy-loads and projects <code>painting.png</code> onto the (plane) object called <code>canvas</code> (which is copy-instanced in the bed and livingroom).<br>
Also, after lazy-loading <code>ocean.com/aquarium.gltf</code>, only the queried objects <code>bass</code> and <code>tuna</code> will be instanced inside <code>aquariumcube</code>.<br>
Resizing will be happen accordingly to its placeholder object (<code>aquariumcube</code>), see chapter Scaling.<br></p>
Resizing will happen according to its placeholder object <code>aquariumcube</code>, see chapter Scaling.<br></p>
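<p>A rough sketch of that <code>src</code> resolution flow (illustrative only: <code>loadAsset</code>, <code>queryScene</code>, <code>fitInto</code> and <code>currentScene</code> are assumed helpers/references, the spec does not prescribe a loader API):</p>
<pre><code>// naive src resolver (sketch): lazy-load the asset, apply the fragment query,
// then fit the result to its placeholder object (see chapter Scaling)
async function resolveSrc( placeholder, src ){
  let url      = src.split('#')[0]                     // '' for local srcs like '#canvas'
  let fragment = src.split('#')[1]                     // e.g. a query like 'q=bass tuna', or undefined
  let asset    = url ? await loadAsset( url ) : currentScene  // remote asset, or reuse local scene
  let objects  = fragment ? queryScene( asset, fragment ) : [ asset ]
  objects.map( function(o){ placeholder.add( o.clone() ) })   // instance inside the placeholder
  fitInto( placeholder, objects )                      // resize to the placeholder (aquariumcube)
}
</code></pre>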
<h1 id="text-in-xr-tagging-linking-to-spatial-objects">Text in XR (tagging,linking to spatial objects)</h1>
<p>We still think and speak in simple text, not in HTML or RDF.<br>
It would be funny when people would shout <code>&lt;h1&gt;FIRE!&lt;/h1&gt;</code> in case of emergency.<br>
Given the myriad of new (non-keyboard) XR interfaces, keeping text as is (not obscuring with markup) is preferred.<br>
The most advanced human will probably not shout <code>&lt;h1&gt;FIRE!&lt;/h1&gt;</code> in case of emergency.<br>
Given the new dawn of (non-keyboard) XR interfaces, keeping text as is (not obscuring with markup) is preferred.<br>
Ideally metadata must come <strong>later with</strong> the text, but should not <strong>obfuscate</strong> the text, nor live <strong>in another</strong> file.<br></p>
<blockquote>
<p>Humans first, machines (AI) later.</p>
<p>Humans first, machines (AI) later (<a href="#core-principle">core principle</a>).</p>
</blockquote>
<p>This way:</p>
<ol>
<li>XR Fragments allows <b id="tagging-text">hasslefree XR text tagging</b>, using BibTeX metadata <strong>at the end of content</strong> (like <a href="https://visual.meta.info">visual-meta</a>).</li>
<li>XR Fragments allows hasslefree <a href="#textual-tag">textual tagging</a>, <a href="#spatial-tag">spatial tagging</a>, and <a href="#supra-tagging">supra tagging</a>, by mapping 3D/text object (class)names to BibTeX</li>
<li>inline BibTeX is the minimum required <strong>requestless metadata</strong>-layer for XR text, RDF/JSON is great but optional (and too verbose for the spec-usecases).</li>
<li>XR Fragments allows hasslefree <a href="#textual-tag">textual tagging</a>, <a href="#spatial-tag">spatial tagging</a>, and <a href="#supra-tagging">supra tagging</a>, by mapping 3D/text object (class)names using BibTeX &lsquo;tags&rsquo;</li>
<li>inline BibTeX &lsquo;tags&rsquo; are the minimum required <strong>requestless metadata</strong>-layer for XR text, RDF/JSON is great (but fits better in the application-layer)</li>
<li>Default font (unless specified otherwise) is a modern monospace font, for maximized tabular expressiveness (see <a href="#core-principle">the core principle</a>).</li>
<li>anti-pattern: hardcoupling a mandatory <strong>obtrusive markuplanguage</strong> or framework with an XR browser (HTML/VRML/Javascript) (see <a href="#core-principle">the core principle</a>)</li>
<li>anti-pattern: limiting human introspection, by immediately funneling human thought into typesafe, precise, pre-categorized metadata like RDF (see <a href="#core-principle">the core principle</a>)</li>
</ol>
<p>This allows recursive connections between text itself, as well as 3D objects and vice versa, using <strong>BiBTeX-tags</strong> :</p>
<p>This allows recursive connections between text itself, as well as 3D objects and vice versa, using <strong>BibTeX-tags</strong> :</p>
<pre><code> +--------------------------------------------------+
| My Notes |
| |
| The houses seen here are built in baroque style. |
| |
| @house{houses, &lt;----- XR Fragment triple/tag: tiny &amp; phrase-matching BiBTeX
| @house{houses, &lt;----- XR Fragment triple/tag: phrase-matching BibTeX
| url = {#.house} &lt;------------------- XR Fragment URI
| } |
+--------------------------------------------------+
</code></pre>
<p>This sets up the following associations in the scene:</p>
<p>This allows instant realtime tagging of objects at various scopes:</p>
<table>
<thead>
<tr>
<th>scope</th>
<th>matching algo</th>
</tr>
</thead>
<tbody>
<tr>
<td><b id="textual-tagging">textual</b></td>
<td>text containing &lsquo;houses&rsquo; is now automatically tagged with &lsquo;house&rsquo; (incl. plaintext <code>src</code> child nodes)</td>
</tr>
<tr>
<td><b id="spatial-tagging">spatial</b></td>
<td>spatial object(s) with <code>&quot;class&quot;:&quot;house&quot;</code> (because of <code>{#.house}</code>) are now automatically tagged with &lsquo;house&rsquo; (incl. child nodes)</td>
</tr>
<tr>
<td><b id="supra-tagging">supra</b></td>
<td>text- or spatial-object(s) (non-descendant nodes) elsewhere, named &lsquo;house&rsquo;, are automatically tagged with &lsquo;house&rsquo; (current node to root node)</td>
</tr>
<tr>
<td><b id="omni-tagging">omni</b></td>
<td>text- or spatial-object(s) (non-descendant nodes) elsewhere, containing class/name &lsquo;house&rsquo;, are automatically tagged with &lsquo;house&rsquo; (top node to all nodes)</td>
</tr>
<tr>
<td><b id="infinite-tagging">infinite</b></td>
<td>text- or spatial-object(s) (non-descendant nodes) elsewhere, containing class/name &lsquo;house&rsquo; or &lsquo;houses&rsquo;, are automatically tagged with &lsquo;house&rsquo; (top node to all nodes)</td>
</tr>
</tbody>
</table>
<p>This empowers the enduser&rsquo;s spatial expressiveness (see <a href="#core-principle">the core principle</a>): spatial wires can be rendered, words can be highlighted, spatial objects can be highlighted/moved/scaled, links can be manipulated by the user.<br>
The simplicity of appending BibTeX &lsquo;tags&rsquo; (humans first, machines later) is also demonstrated by <a href="https://visual-meta.info">visual-meta</a> in greater detail.</p>
<ol>
<li><b id="textual-tagging">textual tag</b>: text or spatial-occurences named &lsquo;houses&rsquo; is now automatically tagged with &lsquo;house&rsquo;</li>
<li><b id="spatial-tagging">spatial tag</b>: spatial object(s) with class:house (#.house) is now automatically tagged with &lsquo;house&rsquo;</li>
<li><b id="supra-tagging">supra-tag</b>: text- or spatial-object named &lsquo;house&rsquo; (spatially) elsewhere, is now automatically tagged with &lsquo;house&rsquo;</li>
<li>The XR Browser needs to offer a global setting/control to adjust tag-scope, with at least the range: <code>[text, spatial, text+spatial, supra, omni, infinite]</code></li>
<li>The XR Browser should always allow the human to view/edit the BibTeX metadata manually, by clicking &lsquo;toggle metadata&rsquo; on the &lsquo;back&rsquo; (contextmenu e.g.) of any XR text, anywhere anytime.</li>
</ol>
<p>Spatial wires can be rendered, words can be highlighted, spatial objects can be highlighted, links can be manipulated by the user.</p>
<blockquote>
<p>The simplicity of appending BibTeX (humans first, machines later) is demonstrated by <a href="https://visual-meta.info">visual-meta</a> in greater detail, and makes it perfect for GUI&rsquo;s to generate (bib)text later. Humans can still view/edit the metadata manually, by clicking &lsquo;toggle metadata&rsquo; on the &lsquo;back&rsquo; (contextmenu e.g.) of any XR text, anywhere anytime.</p>
<p>NOTE: infinite matches both &lsquo;house&rsquo; and &lsquo;houses&rsquo; in text, as well as spatial objects with <code>&quot;class&quot;:&quot;house&quot;</code> or name &ldquo;house&rdquo;. This multiplexing of id/category is deliberate because of <a href="#core-principle">the core principle</a>.</p>
</blockquote>
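<p>A minimal sketch of such scope-based matching, assuming the BibTeX appendix was already parsed into tags like <code>{ class:'house', id:'houses' }</code> (the <code>name</code>, <code>class</code> and <code>text</code> node properties are illustrative, not mandated by this spec):</p>
<pre><code>// naive tag-scope matcher (sketch): does a text- or 3D-node match @house{houses, url = {#.house}} ?
function matchesTag( node, tag, scope ){
  let name = (node.name  || '').toLowerCase()          // 3D object name
  let cls  = (node.class || '').toLowerCase()          // custom property 'class'
  let text = (node.text  || '').toLowerCase()          // plaintext content (if any)
  switch( scope ){
    case 'textual':  return text.indexOf( tag.id ) != -1            // text contains 'houses'
    case 'spatial':  return cls == tag.class                        // 'class':'house' via {#.house}
    case 'supra':    return name == tag.class                       // object named 'house' elsewhere
    case 'omni':     return name == tag.class || cls == tag.class   // class OR name is 'house'
    case 'infinite': return matchesTag( node, tag, 'omni' ) ||
                            text.indexOf( tag.class ) != -1         // also matches 'house(s)' in text
  }
  return false
}
</code></pre>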
<h2 id="default-data-uri-mimetype">Default Data URI mimetype</h2>
@ -419,7 +464,7 @@ Its implications are that local/remote responses can now:</p>
</blockquote>
<p>For all other purposes, regular mimetypes can be used (but are not required by the spec).<br>
To keep XR Fragments a lightweight spec, BiBTeX is used for text-spatial object mappings (not a scripting language or RDF e.g.).</p>
To keep XR Fragments a lightweight spec, BibTeX is used for text-spatial object mappings (not a scripting language or RDF e.g.).</p>
<blockquote>
<p>Applications are also free to attach any JSON(LD / RDF) to spatial objects using custom properties (but is not interpreted by this spec).</p>
@ -473,10 +518,16 @@ This allows rich interaction and interlinking between text and 3D objects:</p>
<li>When (partial) remote content is embedded thru XR Fragment queries (see XR Fragment queries), its related visual-meta can be embedded along with it.</li>
</ol>
<h2 id="bibtex-as-lowest-common-denominator-for-tagging-triple">BibTeX as lowest common denominator for tagging/triple</h2>
<h2 id="bibtex-as-lowest-common-denominator-for-tagging-triples">BibTeX as lowest common denominator for tagging/triples</h2>
<p>The everything-is-text focus of BiBTex is a great advantage for introspection, and perhaps a necessary bridge towards RDF (extrospective).
BibTeX-appendices (visual-meta e.g.) are already adopted in the physical world (academic books), perhaps due to its terseness &amp; simplicity:</p>
<blockquote>
<p>&ldquo;When a car breaks down, the ones <strong>without</strong> turbosupercharger are easier to fix&rdquo;</p>
</blockquote>
<p>Unlike XML or JSON, the typeless, unnested, everything-is-text nature of BibTeX tags is a great advantage for introspection.<br>
In a way, the RDF project should welcome it as a missing sensemaking precursor to (eventual) extrospective RDF.<br>
BibTeX-appendices are already used in the digital AND physical world (academic books, <a href="https://visual-meta.info">visual-meta</a>), perhaps due to its terseness &amp; simplicity.<br>
In that sense, it&rsquo;s one step up from the <code>.ini</code> fileformat (which has never leaked into the physical book-world):</p>
<ol>
<li><b id="frictionless-copy-paste">frictionless copy/pasting</b> (by humans) of (unobtrusive) content AND metadata</li>
@ -487,7 +538,7 @@ BibTeX-appendices (visual-meta e.g.) are already adopted in the physical world (
<thead>
<tr>
<th>characteristic</th>
<th>Plain Text (with BibTeX)</th>
<th>UTF8 Plain Text (with BibTeX)</th>
<th>RDF</th>
</tr>
</thead>
@ -499,6 +550,12 @@ BibTeX-appendices (visual-meta e.g.) are already adopted in the physical world (
<td>extrospective</td>
</tr>
<tr>
<td>structure</td>
<td>fuzzy (sensemaking)</td>
<td>precise</td>
</tr>
<tr>
<td>space/scope</td>
<td>local</td>
@ -518,8 +575,8 @@ BibTeX-appendices (visual-meta e.g.) are already adopted in the physical world (
</tr>
<tr>
<td>markup language(s)</td>
<td>no (appendix)</td>
<td>markup language</td>
<td>just an appendix</td>
<td>~4 different</td>
</tr>
@ -532,61 +589,55 @@ BibTeX-appendices (visual-meta e.g.) are already adopted in the physical world (
<tr>
<td>easy to copy/paste content+metadata</td>
<td>yes</td>
<td>depends</td>
<td>up to application</td>
</tr>
<tr>
<td>easy to write/repair</td>
<td>easy to write/repair for layman</td>
<td>yes</td>
<td>depends</td>
</tr>
<tr>
<td>easy to parse</td>
<td>easy to (de)serialize</td>
<td>yes (fits on A4 paper)</td>
<td>depends</td>
</tr>
<tr>
<td>infrastructure storage</td>
<td>infrastructure</td>
<td>selfcontained (plain text)</td>
<td>(semi)networked</td>
</tr>
<tr>
<td>tagging</td>
<td>yes</td>
<td>yes</td>
<td>freeform tagging/annotation</td>
<td>yes, terse</td>
<td>yes, verbose</td>
</tr>
<tr>
<td>freeform tagging/notes</td>
<td>can be appended to text-content</td>
<td>yes</td>
<td>depends</td>
<td>up to application</td>
</tr>
<tr>
<td>specialized file-type</td>
<td>no</td>
<td>copy-paste text preserves metadata</td>
<td>yes</td>
</tr>
<tr>
<td>copy-paste preserves metadata</td>
<td>yes</td>
<td>depends</td>
<td>up to application</td>
</tr>
<tr>
<td>emoji</td>
<td>yes</td>
<td>depends</td>
<td>depends on encoding</td>
</tr>
<tr>
<td>predicates</td>
<td>free</td>
<td>pre-determined</td>
<td>semi pre-determined</td>
</tr>
<tr>
@ -602,7 +653,7 @@ BibTeX-appendices (visual-meta e.g.) are already adopted in the physical world (
</tr>
<tr>
<td>terse categoryless predicates</td>
<td>terse non-verb predicates</td>
<td>yes</td>
<td>no</td>
</tr>
@ -615,11 +666,7 @@ BibTeX-appendices (visual-meta e.g.) are already adopted in the physical world (
</tbody>
</table>
<blockquote>
<p>To serve humans first, human &lsquo;fuzzy symbolical mind&rsquo; comes first, and <a href="https://en.wikipedia.org/wiki/Borg">&lsquo;categorized typesafe RDF hive mind&rsquo;</a>) later.</p>
</blockquote>
<h2 id="xr-text-bibtex-example-parser">XR text (BibTeX) example parser</h2>
<h2 id="xr-text-w-bibtex-example-parser">XR Text (w. BibTeX) example parser</h2>
<p>Here&rsquo;s a naive XR Text (de)multiplexer in javascript (which also supports visual-meta start/end-blocks):</p>
@ -648,7 +695,7 @@ BibTeX-appendices (visual-meta e.g.) are already adopted in the physical world (
.map( s =&gt; s.trim() ).join(&quot;\n&quot;) // be nice
.replace( /}@/, &quot;}\n@&quot; ) // to authors
.replace( /},}/, &quot;},\n}&quot; ) // which struggle
.replace( /^}/, &quot;\n}&quot; ) // with writing single-line BiBTeX
.replace( /^}/, &quot;\n}&quot; ) // with writing single-line BibTeX
.split( /\n/ ) //
.filter( c =&gt; c.trim() ) // actual processing:
.map( (s) =&gt; {
@ -694,11 +741,11 @@ xrtext.encode(text,meta) // multiplex text &amp; bibte
XR Fragment allows HYPER-copy/paste: time, space and text interlinked.
Therefore, the enduser in an XR Fragment-compatible browser can copy/paste/share data in these ways:</p>
<ul>
<ol>
<li>time/space: 3D object (current animation-loop)</li>
<li>text: TeXt object (including BiBTeX/visual-meta if any)</li>
<li>text: TeXt object (including BibTeX/visual-meta if any)</li>
<li>interlinked: Collected objects by visual-meta tag</li>
</ul>
</ol>
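<p>One way to picture such a HYPER-copy/paste payload is a small structured clipboard object (a sketch only; these field names are illustrative and not part of this spec):</p>
<pre><code>// hypothetical clipboard payload: space, time and text travel together
let clipboard = {
  object    : 'aquariumcube',                          // 1. the copied 3D object
  animation : { t: [1,100] },                          //    its current animation-loop
  text      : 'The houses seen here are built in baroque style.',  // 2. TeXt object
  bibtex    : '@house{houses,\n  url = {#.house}\n}',               //    incl. BibTeX/visual-meta if any
  linked    : [ '#.house' ]                            // 3. collected objects by (visual-meta) tag
}
</code></pre>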
<h1 id="xr-fragment-queries">XR Fragment queries</h1>
@ -716,14 +763,17 @@ Therefore, the enduser in an XR Fragment-compatible browser can copy/paste/share
<p>It&rsquo;s a simple but powerful syntax which allows <b>css</b>-like class/id-selectors with a searchengine prompt-style feeling:</p>
<ol>
<li>queries are only executed when <b>embedded</b> in the asset/scene (thru <code>src</code>). This is to prevent sharing of scene-tampered URL&rsquo;s.</li>
<li>search words are matched against 3D object names or metadata-key(values)</li>
<li>queries are showing/hiding objects <strong>only</strong> when defined as <code>src</code> value (prevents sharing of scene-tampered URL&rsquo;s).</li>
<li>queries are highlighting objects when defined in the top-level (browser) URL (bar).</li>
<li>search words like <code>cube</code> and <code>foo</code> in <code>#q=cube foo</code> are matched against 3D object names or custom metadata-key(values)</li>
<li>search words like <code>cube</code> and <code>foo</code> in <code>#q=cube foo</code> are matched against tags (BibTeX) inside plaintext <code>src</code> values like <code>@cube{redcube, ...</code> e.g.</li>
<li><code>#</code> equals <code>#q=*</code></li>
<li>words starting with <code>.</code> (<code>.language</code>) indicate class-properties</li>
<li>words starting with <code>.</code> like <code>.german</code> match class-metadata of 3D objects like <code>&quot;class&quot;:&quot;german&quot;</code></li>
<li>words starting with <code>.</code> like <code>.german</code> match class-metadata of (BibTeX) tags in XR Text objects like <code>@german{KarlHeinz, ...</code> e.g.</li>
</ol>
<blockquote>
<p>*(*For example**: <code>#q=.foo</code> is a shorthand for <code>#q=class:foo</code>, which will select objects with custom property <code>class</code>:<code>foo</code>. Just a simple <code>#q=cube</code> will simply select an object named <code>cube</code>.</p>
<p><strong>For example</strong>: <code>#q=.foo</code> is a shorthand for <code>#q=class:foo</code>, which will select objects with custom property <code>class</code>:<code>foo</code>. Just a simple <code>#q=cube</code> will simply select an object named <code>cube</code>.</p>
</blockquote>
<ul>
@ -732,13 +782,50 @@ Therefore, the enduser in an XR Fragment-compatible browser can copy/paste/share
<h2 id="including-excluding">including/excluding</h2>
<p>|&ldquo;operator&rdquo; | &ldquo;info&rdquo; |
|<code>*</code> | select all objects (only allowed in <code>src</code> custom property) in the <b>current</b> scene (<b>after</b> the default [[predefined_view|predefined_view]] <code>#</code> was executed)|
|<code>-</code> | removes/hides object(s) |
|<code>:</code> | indicates an object-embedded custom property key/value |
|<code>.</code> | alias for <code>class:</code> (<code>.foo</code> equals <code>class:foo</code> |
|<code>&gt;</code> <code>&lt;</code>| compare float or int number|
|<code>/</code> | reference to root-scene.<br>Useful in case of (preventing) showing/hiding objects in nested scenes (instanced by [[src]])<br><code>#q=-/cube</code> hides object <code>cube</code> only in the root-scene (not nested <code>cube</code> objects)<br> <code>#q=-cube</code> hides both object <code>cube</code> in the root-scene <b>AND</b> nested <code>skybox</code> objects |</p>
<table>
<thead>
<tr>
<th>operator</th>
<th>info</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>*</code></td>
<td>select all objects (only useful in <code>src</code> custom property)</td>
</tr>
<tr>
<td><code>-</code></td>
<td>removes/hides object(s)</td>
</tr>
<tr>
<td><code>:</code></td>
<td>indicates an object-embedded custom property key/value</td>
</tr>
<tr>
<td><code>.</code></td>
<td>alias for <code>class:</code> (<code>.foo</code> equals <code>&quot;class&quot;:&quot;foo&quot;</code>)</td>
</tr>
<tr>
<td><code>&gt;</code> <code>&lt;</code></td>
<td>compare float or int number</td>
</tr>
<tr>
<td><code>/</code></td>
<td>reference to root-scene.<br>Useful in case of (preventing) showing/hiding objects in nested scenes (instanced by <code>src</code>) (*)</td>
</tr>
</tbody>
</table>
<blockquote>
<p>* = <code>#q=-/cube</code> hides object <code>cube</code> only in the root-scene (not nested <code>cube</code> objects)<br> <code>#q=-cube</code> hides both object <code>cube</code> in the root-scene <b>AND</b> nested <code>skybox</code> objects</p>
</blockquote>
<p><a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/q.js">» example implementation</a>
<a href="https://github.com/coderofsalvation/xrfragment/blob/main/example/assets/query.gltf#L192">» example 3D asset</a>
@ -763,7 +850,7 @@ Therefore, the enduser in an XR Fragment-compatible browser can copy/paste/share
<li>therefore we set <code>id</code> to <code>true</code> or <code>false</code> (false=excluder <code>-</code>)</li>
<li>and we set <code>root</code> to <code>true</code> or <code>false</code> (true=<code>/</code> root selector is present)</li>
<li>we convert key &lsquo;/foo&rsquo; into &lsquo;foo&rsquo;</li>
<li>finally we add the key/value to the store (<code>store.foo = {id:false,root:true}</code> e.g.)</li>
<li>finally we add the key/value to the store like <code>store.foo = {id:false,root:true}</code> e.g.</li>
</ol>
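<p>A compact JavaScript sketch of the steps above (illustrative only, not the reference implementation):</p>
<pre><code>// naive #q= parser (sketch), following the steps above
function parseQuery( q ){
  let store = {}
  q.split(' ').filter( function(w){ return w.trim() } ).map( function(word){
    let id   = word[0] != '-'                // false = excluder '-'
    word     = word.replace( /^-/, '' )
    let root = word[0] == '/'                // true = '/' root selector is present
    word     = word.replace( /^\//, '' )     // convert key '/foo' into 'foo'
    store[ word ] = { id: id, root: root }   // e.g. store.foo = {id:false, root:true}
  })
  return store
}
// parseQuery('-/cube foo') yields { cube:{id:false,root:true}, foo:{id:true,root:false} }
</code></pre>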
<blockquote>

View file

@ -95,7 +95,9 @@ value: draft-XRFRAGMENTS-leonvankammen-00
This draft offers a specification for 4D URLs & navigation, to link 3D scenes and text together with- or without a network-connection.<br>
The specification promotes spatial addressability, sharing, navigation, querying and tagging interactive (text)objects across (XR) Browsers.<br>
XR Fragments allows us to enrich existing dataformats, by recursive use of existing proven technologies like [URI Fragments](https://en.wikipedia.org/wiki/URI_fragment) and [visual-meta](https://visual-meta.info).<br>
XR Fragments allows us to enrich existing dataformats, by recursive use of existing proven technologies like [URI Fragments](https://en.wikipedia.org/wiki/URI_fragment) and BibTeX notation.<br>
> Almost every idea in this document is demonstrated at [https://xrfragment.org](https://xrfragment.org)
{mainmatter}
@ -107,19 +109,21 @@ However, thru the lens of authoring, their lowest common denominator is still: p
XR Fragments allows us to enrich/connect existing dataformats, by recursive use of existing technologies:<br>
1. addressability and navigation of 3D scenes/objects: [URI Fragments](https://en.wikipedia.org/wiki/URI_fragment) + src/href spatial metadata
1. hasslefree tagging across text and spatial objects using [BiBTeX](https://en.wikipedia.org/wiki/BibTeX) ([visual-meta](https://visual-meta.info) e.g.)
1. hasslefree tagging across text and spatial objects using [BibTeX](https://en.wikipedia.org/wiki/BibTeX) 'tags' as appendix (see [visual-meta](https://visual-meta.info) e.g.)
> NOTE: The chapters in this document are ordered from highlevel to lowlevel (technical) as much as possible
# Core principle
XR Fragments strives to serve humans first, machine(implementations) later, by ensuring hasslefree text-vs-thought feedback loops.<br>
XR Fragments strives to serve (nontechnical/fuzzy) humans first, and machine(implementations) later, by ensuring hasslefree text-vs-thought feedback loops.<br>
This also means that the repair-ability of machine-matters should be human friendly too (not too complex).<br>
> "When a car breaks down, the ones **without** turbosupercharger are easier to fix"
Let's always focus on average humans: the 'fuzzy symbolical mind' must be served first, before serving the greater ['categorized typesafe RDF hive mind'](https://en.wikipedia.org/wiki/Borg).
> Humans first, machines (AI) later.
# Conventions and Definitions
|definition | explanation |
@ -128,16 +132,17 @@ Let's always focus on average humans: the 'fuzzy symbolical mind' must be served
|scene | a (local/remote) 3D scene or 3D file (index.gltf e.g.) |
|3D object | an object inside a scene characterized by vertex-, face- and customproperty data. |
|metadata | custom properties of text, 3D Scene or Object(nodes), relevant to machines and a human minority (academics/developers) |
|XR fragment | URI Fragment with spatial hints (`#pos=0,0,0&t=1,100` e.g.) |
|XR fragment | URI Fragment with spatial hints like `#pos=0,0,0&t=1,100` e.g. |
|src | (HTML-piggybacked) metadata of a 3D object which instances content |
|href | (HTML-piggybacked) metadata of a 3D object which links to content |
|query | an URI Fragment-operator which queries object(s) from a scene (`#q=cube`) |
|visual-meta | [visual-meta](https://visual.meta.info) data appended to text which is indirectly visible/editable in XR. |
|query | an URI Fragment-operator which queries object(s) from a scene like `#q=cube` |
|visual-meta | [visual-meta](https://visual.meta.info) data appended to text/books/papers which is indirectly visible/editable in XR. |
|requestless metadata | opposite of networked metadata (RDF/HTML requests can easily fan out into framerate-dropping, hence not used a lot in games). |
|FPS | frames per second in spatial experiences (games,VR,AR e.g.), should be as high as possible |
|introspective | inward sensemaking ("I feel this belongs to that") |
|extrospective | outward sensemaking ("I'm fairly sure John is a person who lives in oklahoma") |
|`◻` | ascii representation of a 3D object/mesh |
|(un)obtrusive | obtrusive: wrapping human text/thought in XML/HTML/JSON obfuscates human text into a salad of machine-symbols and words |
# List of URI Fragments
@ -184,11 +189,11 @@ Here's an ascii representation of a 3D scene-graph which contains 3D objects `
An XR Fragment-compatible browser viewing this scene, allows the end-user to interact with the `buttonA` and `buttonB`.<br>
In case of `buttonA` the end-user will be teleported to another location and time in the **current loaded scene**, but `buttonB` will
**replace the current scene** with a new one (`other.fbx`).
**replace the current scene** with a new one, like `other.fbx`.
# Embedding 3D content
Here's an ascii representation of a 3D scene-graph with 3D objects (`◻`) which embeds remote & local 3D objects (`◻`) (without) using queries:
Here's an ascii representation of a 3D scene-graph with 3D objects `◻` which embeds remote & local 3D objects `◻` with (or without) queries:
```
+--------------------------------------------------------+ +-------------------------+
@ -212,7 +217,7 @@ Here's an ascii representation of a 3D scene-graph with 3D objects (`◻`) which
An XR Fragment-compatible browser viewing this scene, lazy-loads and projects `painting.png` onto the (plane) object called `canvas` (which is copy-instanced in the bed and livingroom).<br>
Also, after lazy-loading `ocean.com/aquarium.gltf`, only the queried objects `bass` and `tuna` will be instanced inside `aquariumcube`.<br>
Resizing will be happen accordingly to its placeholder object (`aquariumcube`), see chapter Scaling.<br>
Resizing will happen according to its placeholder object `aquariumcube`, see chapter Scaling.<br>
# Text in XR (tagging,linking to spatial objects)
@ -222,18 +227,18 @@ The most advanced human will probably not shout `<h1>FIRE!</h1>` in case of emer
Given the new dawn of (non-keyboard) XR interfaces, keeping text as is (not obscuring with markup) is preferred.<br>
Ideally metadata must come **later with** the text, but should not **obfuscate** the text, nor live **in another** file.<br>
> Humans first, machines (AI) later.
> Humans first, machines (AI) later ([core principle](#core-principle)).
This way:
1. XR Fragments allows <b id="tagging-text">hasslefree XR text tagging</b>, using BibTeX metadata **at the end of content** (like [visual-meta](https://visual.meta.info)).
1. XR Fragments allows hasslefree <a href="#textual-tag">textual tagging</a>, <a href="#spatial-tag">spatial tagging</a>, and <a href="#supra-tagging">supra tagging</a>, by mapping 3D/text object (class)names to BibTeX
3. inline BibTeX is the minimum required **requestless metadata**-layer for XR text, RDF/JSON is great (but fits better in the application-layer)
1. XR Fragments allows hasslefree <a href="#textual-tag">textual tagging</a>, <a href="#spatial-tag">spatial tagging</a>, and <a href="#supra-tagging">supra tagging</a>, by mapping 3D/text object (class)names using BibTeX 'tags'
3. inline BibTeX 'tags' are the minimum required **requestless metadata**-layer for XR text, RDF/JSON is great (but fits better in the application-layer)
5. Default font (unless specified otherwise) is a modern monospace font, for maximized tabular expressiveness (see [the core principle](#core-principle)).
6. anti-pattern: hardcoupling a mandatory **obtrusive markuplanguage** or framework with an XR browser (HTML/VRML/Javascript) (see [the core principle](#core-principle))
7. anti-pattern: limiting human introspection, by immediately funneling human thought into typesafe, precise, pre-categorized metadata like RDF (see [the core principle](#core-principle))
This allows recursive connections between text itself, as well as 3D objects and vice versa, using **BiBTeX-tags** :
This allows recursive connections between text itself, as well as 3D objects and vice versa, using **BibTeX-tags** :
```
+--------------------------------------------------+
@ -241,21 +246,29 @@ This allows recursive connections between text itself, as well as 3D objects and
| |
| The houses seen here are built in baroque style. |
| |
| @house{houses, <----- XR Fragment triple/tag: phrase-matching BiBTeX
| @house{houses, <----- XR Fragment triple/tag: phrase-matching BibTeX
| url = {#.house} <------------------- XR Fragment URI
| } |
+--------------------------------------------------+
```
This sets up the following associations in the scene:
This allows instant realtime tagging of objects at various scopes:
1. <b id="textual-tagging">textual tag</b>: text or spatial-occurences named 'houses' is now automatically tagged with 'house'
1. <b id="spatial-tagging">spatial tag</b>: spatial object(s) with `"class":"house"` (#.house) are now automatically tagged with 'house'
1. <b id="supra-tagging">supra-tag</b>: text- or spatial-object(s) named 'house' elsewhere, are automatically tagged with 'house'
| scope | matching algo |
|---------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <b id="textual-tagging">textual</b> | text containing 'houses' is now automatically tagged with 'house' (incl. plaintext `src` child nodes) |
| <b id="spatial-tagging">spatial</b> | spatial object(s) with `"class":"house"` (because of `{#.house}`) are now automatically tagged with 'house' (incl. child nodes) |
| <b id="supra-tagging">supra</b> | text- or spatial-object(s) (non-descendant nodes) elsewhere, named 'house', are automatically tagged with 'house' (current node to root node) |
| <b id="omni-tagging">omni</b> | text- or spatial-object(s) (non-descendant nodes) elsewhere, containing class/name 'house', are automatically tagged with 'house' (too node to all nodes) |
| <b id="infinite-tagging">infinite</b> | text- or spatial-object(s) (non-descendant nodes) elsewhere, containing class/name 'house' or 'houses', are automatically tagged with 'house' (too node to all nodes) |
This allows spatial wires to be rendered, words can be highlighted, spatial objects can be highlighted/moved/scaled, links can be manipulated by the user.<br>
This empowers the enduser's spatial expressiveness (see [the core principle](#core-principle)): spatial wires can be rendered, words can be highlighted, spatial objects can be highlighted/moved/scaled, links can be manipulated by the user.<br>
The simplicity of appending BibTeX 'tags' (humans first, machines later) is also demonstrated by [visual-meta](https://visual-meta.info) in greater detail.
> The simplicity of appending BibTeX (humans first, machines later) is demonstrated by [visual-meta](https://visual-meta.info) in greater detail, and makes it perfect for HUDs/GUI's to generate (bib)text later. Humans can still view/edit the metadata manually, by clicking 'toggle metadata' on the 'back' (contextmenu e.g.) of any XR text, anywhere anytime.
1. The XR Browser needs to offer a global setting/control to adjust tag-scope, with at least the range: `[text, spatial, text+spatial, supra, omni, infinite]`
1. The XR Browser should always allow the human to view/edit the BibTeX metadata manually, by clicking 'toggle metadata' on the 'back' (contextmenu e.g.) of any XR text, anywhere anytime.
> NOTE: infinite matches both 'house' and 'houses' in text, as well as spatial objects with `"class":"house"` or name "house". This multiplexing of id/category is deliberate because of [the core principle](#core-principle).
## Default Data URI mimetype
@ -285,7 +298,7 @@ Its implications are that local/remote responses can now:
> This significantly expands expressiveness and portability of human text, by **postponing machine-concerns to the end of the human text** in contrast to literal interweaving of content and markupsymbols (or extra network requests, webservices e.g.).
For all other purposes, regular mimetypes can be used (but are not required by the spec).<br>
To keep XR Fragments a lightweight spec, BiBTeX is used for text-spatial object mappings (not a scripting language or RDF e.g.).
To keep XR Fragments a lightweight spec, BibTeX is used for text-spatial object mappings (not a scripting language or RDF e.g.).
> Applications are also free to attach any JSON(LD / RDF) to spatial objects using custom properties (but is not interpreted by this spec).
@ -339,8 +352,12 @@ This allows rich interaction and interlinking between text and 3D objects:
## BibTeX as lowest common denominator for tagging/triples
The everything-is-text focus of BiBTex is a great advantage for introspection, and perhaps a necessary bridge towards RDF (extrospective).
BibTeX-appendices (visual-meta e.g.) are already adopted in the physical world (academic books), perhaps due to its terseness & simplicity:
> "When a car breaks down, the ones **without** turbosupercharger are easier to fix"
Unlike XML or JSON, the typeless, unnested, everything-is-text nature of BibTeX tags is a great advantage for introspection.<br>
In a way, the RDF project should welcome it as a missing sensemaking precursor to (eventual) extrospective RDF.<br>
BibTeX-appendices are already used in the digital AND physical world (academic books, [visual-meta](https://visual-meta.info)), perhaps due to its terseness & simplicity.<br>
In that sense, it's one step up from the `.ini` fileformat (which has never leaked into the physical book-world):
1. <b id="frictionless-copy-paste">frictionless copy/pasting</b> (by humans) of (unobtrusive) content AND metadata
1. an introspective 'sketchpad' for metadata, which can (optionally) mature into RDF later
@ -348,6 +365,7 @@ BibTeX-appendices (visual-meta e.g.) are already adopted in the physical world (
| characteristic | UTF8 Plain Text (with BibTeX) | RDF |
|------------------------------------|-------------------------------|---------------------------|
| perspective | introspective | extrospective |
| structure | fuzzy (sensemaking) | precise |
| space/scope | local | world |
| everything is text (string) | yes | no |
| leaves (dictated) text intact | yes | no |
@ -357,7 +375,7 @@ BibTeX-appendices (visual-meta e.g.) are already adopted in the physical world (
| easy to write/repair for layman | yes | depends |
| easy to (de)serialize | yes (fits on A4 paper) | depends |
| infrastructure | selfcontained (plain text) | (semi)networked |
| freeform tagging | yes, terse | yes, verbose |
| freeform tagging/annotation | yes, terse | yes, verbose |
| can be appended to text-content | yes | up to application |
| copy-paste text preserves metadata | yes | up to application |
| emoji | yes | depends on encoding |
@ -367,73 +385,106 @@ BibTeX-appendices (visual-meta e.g.) are already adopted in the physical world (
| terse non-verb predicates | yes | no |
| nested structures | no | yes |
## XR text (BibTeX) example parser
## XR Text (w. BibTeX) example parser
Here's a naive XR Text (de)multiplexer in javascript (which also supports visual-meta start/end-blocks):
Here's an XR Text (de)multiplexer in javascript (which also consumes start/end-blocks like in visual-meta):
```
xrtext = {
decode: {
text: (str) => {
let meta={}, text='', last='', data = '';
decode: (str) => {
let meta={}, text='', bibtex = [], cur = meta, section = ''
regex= {
bibtex: /^@/,
section: { start: /@{(\S+)-start}/, suffix: /-(start|end)/},
prop: { key: /=.*?{/ , stop: /},/ },
tag: { start: /^@\S+[{,}]$/, stop: /}/ }
}
let reset = () => { bibtex = []; cur = meta }
str.split(/\r?\n/).map( (line) => {
if( !data ) data = last === '' && line.match(/^@/) ? line[0] : ''
if( data ){
if( line === '' ){
xrtext.decode.bibtex(data.substr(1),meta)
data=''
}else data += `${line}\n`
if( Object.keys(meta).length == 0 && !line.match(regex.bibtex) )
text += line+'\n'
if( line.match(regex.section.start) )
section = line.match(regex.section.start)
if( bibtex.length ){
bibtex.push(line)
token = bibtex.join('')
if( token.match( regex.prop.key ) && token.match(/},/) ){
value = token.substr( token.indexOf('{')+1, token.lastIndexOf('}') )
key = token.replace(/=.*/,'').trim()
cur[ key ] = value.replace(regex.prop.stop,'').trim()
token = token.lastIndexOf('}') == token.length-1
? ''
: token.substr( token.lastIndexOf('},')+2 )
bibtex = [ token + ' ']
}else if( token.match(regex.tag.stop) ) reset()
}else if( line.trim().match(regex.bibtex) ){
bibtex = [' ']
key = line.trim().match(regex.tag.start)[0]
if( key.match(regex.section.suffix) ) return
cur = ( cur[ key ] = {} )
if( section ){
cur.section = section[0].replace(regex.section.suffix,'')
.replace(/[@}{]/g,'')
}
}
text += data ? '' : `${line}\n`
last=line
})
return {text, meta}
},
bibtex: (str,meta) => {
let st = [meta]
str
.split(/\r?\n/ )
.map( s => s.trim() ).join("\n") // be nice
.replace( /}@/, "}\n@" ) // to authors
.replace( /},}/, "},\n}" ) // which struggle
.replace( /^}/, "\n}" ) // with writing single-line BiBTeX
.split( /\n/ ) //
.filter( c => c.trim() ) // actual processing:
.map( (s) => {
if( s.match(/(^}|-end})/) && st.length > 1 ) st.shift()
else if( s.match(/^@/) ) st.unshift( st[0][ s.replace(/(-start|,)/g,'') ] = {} )
else s.replace( /(\w+)\s*=\s*{(.*)}(,)?/g, (m,k,v) => st[0][k] = v )
})
return meta
}
},
encode: (text,meta) => {
if( text === false ){
if (typeof meta === "object") {
return Object.keys(meta).map(k =>
typeof meta[k] == "string"
? ` ${k} = {${meta[k]}},`
: `${ k.match(/[}{]$/) ? k.replace('}','-start}') : `${k},` }\n` +
`${ xrtext.encode( false, meta[k])}\n` +
`${ k.match(/}$/) ? k.replace('}','-end}') : '}' }\n`
.split("\n").filter( s => s.trim() ).join("\n")
)
.join("\n")
}
return meta.toString();
}else return `${text}\n${xrtext.encode(false,meta)}`
str = text+"\n"
for( let i in meta ){
let item = meta[i]
str += `${i}\n`
for( let j in item ) str += ` ${j} = {${item[j]}}\n`
str += `}\n`
}
return str
}
}
var {meta,text} = xrtext.decode.text(str) // demultiplex text & bibtex
meta['@foo{'] = { "note":"note from the user"} // edit metadata
xrtext.encode(text,meta) // multiplex text & bibtex back together
var {meta,text} = xrtext.decode(str) // demultiplex text & bibtex tags
meta['@foo{'] = { "note":"note from the user"} // edit metadata
out = xrtext.encode(text,meta) // multiplex text & bibtex tags back together
```
> above can be used as a startingpoint for LLVM's to translate/steelman to any language.
> The above can be used as a starting point for LLMs to translate/steelman to a more formal form/language.
1. The XR Fragments spec's (de)serialization does not aim to harden the BibTeX format
2. Dumb, unnested BibTeX: always deserialize to a flat lookuptable of tags for speed & simplicity ([core principle](#core-principle))
3. multi-line BibTeX values should be supported
4. BibTeX snippets should always start in the beginning of a line (regex: ^@), hence mimetype `text/plain;charset=utf-8;bibtex=^@`
5. Be strict in sending (`encode()`) Dumb BibTeX (start/stop-section becomes a property) (*)
6. Be liberal in receiving, hence a relatively bigger `decode()` (also supports [visual-meta](https://visual-meta.info) start/stop-sections e.g.)
```
@{references-start}
@misc{emilyHegland/Edgar&Frod,
author = {Emily Hegland},
title = {Edgar & Frode Hegland, November 2021},
year = {2021},
month = {11},
}
```
The above BibTeX-flavor can be imported, however it will be rewritten to Dumb BibTeX to satisfy rules 2 & 5, as well as the [core principle](#core-principle)
```
@visual-meta{
version = {1.1},
generator = {Author 7.6.2 (1064)},
section = {visual-meta-header}
}
@misc{emilyHegland/Edgar&Frod,
author = {Emily Hegland},
title = {Edgar & Frode Hegland, November 2021},
year = {2021},
month = {11},
section = {references}
}
```
# HYPER copy/paste
@ -441,9 +492,9 @@ The previous example, offers something exciting compared to simple copy/paste of
XR Fragment allows HYPER-copy/paste: time, space and text interlinked.
Therefore, the enduser in an XR Fragment-compatible browser can copy/paste/share data in these ways:
* time/space: 3D object (current animation-loop)
* text: TeXt object (including BiBTeX/visual-meta if any)
* interlinked: Collected objects by visual-meta tag
1. time/space: 3D object (current animation-loop)
1. text: TeXt object (including BibTeX/visual-meta if any)
1. interlinked: Collected objects by visual-meta tag
# XR Fragment queries
@ -458,24 +509,30 @@ Include, exclude, hide/shows objects using space-separated strings:
It's a simple but powerful syntax which allows <b>css</b>-like class/id-selectors with a searchengine prompt-style feeling:
1. queries are only executed when <b>embedded</b> in the asset/scene (thru `src`). This is to prevent sharing of scene-tampered URL's.
2. search words are matched against 3D object names or metadata-key(values)
3. `#` equals `#q=*`
4. words starting with `.` (`.language`) indicate class-properties
1. queries are showing/hiding objects **only** when defined as `src` value (prevents sharing of scene-tampered URL's).
1. queries are highlighting objects when defined in the top-level (browser) URL (bar).
1. search words like `cube` and `foo` in `#q=cube foo` are matched against 3D object names or custom metadata-key(values)
1. search words like `cube` and `foo` in `#q=cube foo` are matched against tags (BibTeX) inside plaintext `src` values like `@cube{redcube, ...` e.g.
1. `#` equals `#q=*`
1. words starting with `.` like `.german` match class-metadata of 3D objects like `"class":"german"`
1. words starting with `.` like `.german` match class-metadata of (BibTeX) tags in XR Text objects like `@german{KarlHeinz, ...` e.g.
> *(*For example**: `#q=.foo` is a shorthand for `#q=class:foo`, which will select objects with custom property `class`:`foo`. Just a simple `#q=cube` will simply select an object named `cube`.
> **For example**: `#q=.foo` is a shorthand for `#q=class:foo`, which will select objects with custom property `class`:`foo`. Just a simple `#q=cube` will simply select an object named `cube`.
* see [an example video here](https://coderofsalvation.github.io/xrfragment.media/queries.mp4)
## including/excluding
|''operator'' | ''info'' |
|`*` | select all objects (only allowed in `src` custom property) in the <b>current</b> scene (<b>after</b> the default [[predefined_view|predefined_view]] `#` was executed)|
|`-` | removes/hides object(s) |
|`:` | indicates an object-embedded custom property key/value |
|`.` | alias for `class:` (`.foo` equals `class:foo` |
|`>` `<`| compare float or int number|
|`/` | reference to root-scene.<br>Useful in case of (preventing) showing/hiding objects in nested scenes (instanced by [[src]])<br>`#q=-/cube` hides object `cube` only in the root-scene (not nested `cube` objects)<br> `#q=-cube` hides both object `cube` in the root-scene <b>AND</b> nested `skybox` objects |
| operator | info |
|----------|-------------------------------------------------------------------------------------------------------------------------------|
| `*` | select all objects (only useful in `src` custom property) |
| `-` | removes/hides object(s) |
| `:` | indicates an object-embedded custom property key/value |
| `.`      | alias for `class:` (`.foo` equals `"class":"foo"`)                                                                            |
| `>` `<` | compare float or int number |
| `/` | reference to root-scene.<br>Useful in case of (preventing) showing/hiding objects in nested scenes (instanced by `src`) (*) |
> \* = `#q=-/cube` hides object `cube` only in the root-scene (not nested `cube` objects)<br> `#q=-cube` hides both object `cube` in the root-scene <b>AND</b> nested `skybox` objects
[» example implementation](https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/q.js)
[» example 3D asset](https://github.com/coderofsalvation/xrfragment/blob/main/example/assets/query.gltf#L192)
@ -499,7 +556,7 @@ Here's how to write a query parser:
1. therefore we set `id` to `true` or `false` (false=excluder `-`)
1. and we set `root` to `true` or `false` (true=`/` root selector is present)
1. we convert key '/foo' into 'foo'
1. finally we add the key/value to the store (`store.foo = {id:false,root:true}` e.g.)
1. finally we add the key/value to the store like `store.foo = {id:false,root:true}` e.g.
> An example query-parser (which compiles to many languages) can be [found here](https://github.com/coderofsalvation/xrfragment/blob/main/src/xrfragment/Query.hx)

View file

@ -3,7 +3,7 @@
Internet Engineering Task Force L.R. van Kammen
Internet-Draft 4 September 2023
Internet-Draft 5 September 2023
Intended status: Informational
@ -20,8 +20,10 @@ Abstract
for (XR) Browsers.
XR Fragments allows us to enrich existing dataformats, by recursive
use of existing proven technologies like URI Fragments
(https://en.wikipedia.org/wiki/URI_fragment) and visual-meta
(https://visual-meta.info).
(https://en.wikipedia.org/wiki/URI_fragment) and BibTeX notation.
Almost every idea in this document is demonstrated at
https://xrfragment.org (https://xrfragment.org)
Status of This Memo
@ -38,7 +40,7 @@ Status of This Memo
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on 7 March 2024.
This Internet-Draft will expire on 8 March 2024.
Copyright Notice
@ -48,16 +50,16 @@ Copyright Notice
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents (https://trustee.ietf.org/
license-info) in effect on the date of publication of this document.
Please review these documents carefully, as they describe your rights
and restrictions with respect to this document. Code Components
van Kammen Expires 7 March 2024 [Page 1]
van Kammen Expires 8 March 2024 [Page 1]
Internet-Draft XR Fragments September 2023
Please review these documents carefully, as they describe your rights
and restrictions with respect to this document. Code Components
extracted from this document must include Revised BSD License text as
described in Section 4.e of the Trust Legal Provisions and are
provided without warranty as described in the Revised BSD License.
@ -65,25 +67,26 @@ Internet-Draft XR Fragments September 2023
Table of Contents
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 2
2. Conventions and Definitions . . . . . . . . . . . . . . . . . 3
3. Core principle . . . . . . . . . . . . . . . . . . . . . . . 4
2. Core principle . . . . . . . . . . . . . . . . . . . . . . . 3
3. Conventions and Definitions . . . . . . . . . . . . . . . . . 3
4. List of URI Fragments . . . . . . . . . . . . . . . . . . . . 4
5. List of metadata for 3D nodes . . . . . . . . . . . . . . . . 4
5. List of metadata for 3D nodes . . . . . . . . . . . . . . . . 5
6. Navigating 3D . . . . . . . . . . . . . . . . . . . . . . . . 5
7. Embedding 3D content . . . . . . . . . . . . . . . . . . . . 5
7. Embedding 3D content . . . . . . . . . . . . . . . . . . . . 6
8. Text in XR (tagging,linking to spatial objects) . . . . . . . 6
8.1. Default Data URI mimetype . . . . . . . . . . . . . . . . 8
8.2. URL and Data URI . . . . . . . . . . . . . . . . . . . . 9
8.3. BibTeX as lowest common denominator for tagging/triple . 10
8.4. XR text (BibTeX) example parser . . . . . . . . . . . . . 11
9. HYPER copy/paste . . . . . . . . . . . . . . . . . . . . . . 13
10. XR Fragment queries . . . . . . . . . . . . . . . . . . . . . 13
10.1. including/excluding . . . . . . . . . . . . . . . . . . 14
10.2. Query Parser . . . . . . . . . . . . . . . . . . . . . . 14
10.3. XR Fragment URI Grammar . . . . . . . . . . . . . . . . 15
11. Security Considerations . . . . . . . . . . . . . . . . . . . 15
12. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 15
13. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 15
8.1. Default Data URI mimetype . . . . . . . . . . . . . . . . 9
8.2. URL and Data URI . . . . . . . . . . . . . . . . . . . . 10
8.3. BibTeX as lowest common denominator for tagging/
triples . . . . . . . . . . . . . . . . . . . . . . . . . 11
8.4. XR Text (w. BibTeX) example parser . . . . . . . . . . . 13
9. HYPER copy/paste . . . . . . . . . . . . . . . . . . . . . . 14
10. XR Fragment queries . . . . . . . . . . . . . . . . . . . . . 15
10.1. including/excluding . . . . . . . . . . . . . . . . . . 15
10.2. Query Parser . . . . . . . . . . . . . . . . . . . . . . 16
10.3. XR Fragment URI Grammar . . . . . . . . . . . . . . . . 17
11. Security Considerations . . . . . . . . . . . . . . . . . . . 17
12. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 17
13. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 17
1. Introduction
@ -91,101 +94,108 @@ Table of Contents
introducing new dataformats?
Historically, there have been many attempts to create the ultimate
markuplanguage or 3D fileformat.
However, thru the lens of authoring their lowest common denominator
However, thru the lens of authoring, their lowest common denominator
is still: plain text.
XR Fragments allows us to enrich existing dataformats, by recursive
use of existing technologies:
XR Fragments allows us to enrich/connect existing dataformats, by
recursive use of existing technologies:
1. addressability and navigation of 3D scenes/objects: URI Fragments
(https://en.wikipedia.org/wiki/URI_fragment) + src/href spatial
metadata
2. hasslefree tagging across text and spatial objects using BiBTeX
(visual-meta (https://visual-meta.info) e.g.)
2. hasslefree tagging across text and spatial objects using BibTeX
(https://en.wikipedia.org/wiki/BibTeX) 'tags' as appendix (see
visual-meta (https://visual-meta.info) e.g.)
van Kammen Expires 8 March 2024 [Page 2]
Internet-Draft XR Fragments September 2023
| NOTE: The chapters in this document are ordered from highlevel to
| lowlevel (technical) as much as possible
2. Core principle
XR Fragments strives to serve (nontechnical/fuzzy) humans first, and
machine(implementations) later, by ensuring hasslefree text-vs-
thought feedback loops.
This also means that the repair-ability of machine-matters should be
human friendly too (not too complex).
| "When a car breaks down, the ones without turbosupercharger are
| "When a car breaks down, the ones *without* turbosupercharger are
| easier to fix"
Let's always focus on average humans: the 'fuzzy symbolical mind'
must be served first, before serving the greater 'categorized
typesafe RDF hive mind' (https://en.wikipedia.org/wiki/Borg).
| Humans first, machines (AI) later.
3. Conventions and Definitions
+===============+=============================================+
| definition | explanation |
+===============+=============================================+
| human | a sentient being who thinks fuzzy, absorbs, |
| | and shares thought (by plain text, not |
| | markuplanguage) |
+---------------+---------------------------------------------+
| scene | a (local/remote) 3D scene or 3D file |
| | (index.gltf e.g.) |
+---------------+---------------------------------------------+
| 3D object | an object inside a scene characterized by |
| | vertex-, face- and customproperty data. |
+---------------+---------------------------------------------+
| metadata | custom properties of text, 3D Scene or |
| | Object(nodes), relevant to machines and a |
| | human minority (academics/developers) |
+---------------+---------------------------------------------+
| XR fragment | URI Fragment with spatial hints like |
| | #pos=0,0,0&t=1,100 e.g. |
+---------------+---------------------------------------------+
| src | (HTML-piggybacked) metadata of a 3D object |
| | which instances content |
+---------------+---------------------------------------------+
| href | (HTML-piggybacked) metadata of a 3D object |
| | which links to content |
+---------------+---------------------------------------------+
| query | an URI Fragment-operator which queries |
| | object(s) from a scene like #q=cube |
+---------------+---------------------------------------------+
| visual-meta | visual-meta (https://visual.meta.info) data |
| | appended to text/books/papers which is |
| | indirectly visible/editable in XR. |
+---------------+---------------------------------------------+
| requestless | opposite of networked metadata (RDF/HTML |
| metadata | requests can easily fan out into framerate- |
| | dropping, hence not used a lot in games). |
+---------------+---------------------------------------------+
| FPS | frames per second in spatial experiences |
| | (games,VR,AR e.g.), should be as high as |
| | possible |
+---------------+---------------------------------------------+
| introspective | inward sensemaking ("I feel this belongs to |
| | that") |
+---------------+---------------------------------------------+
| extrospective | outward sensemaking ("I'm fairly sure John |
| | is a person who lives in oklahoma") |
+---------------+---------------------------------------------+
| &#9723; | ascii representation of an 3D object/mesh |
+---------------+---------------------------------------------+
Table 1
4. List of URI Fragments
+==========+=========+==============+============================+
@ -209,6 +219,13 @@ Internet-Draft XR Fragments September 2023
| xyz coordinates are similar to ones found in SVG Media Fragments
5. List of metadata for 3D nodes
+=======+========+================+============================+
@ -218,14 +235,6 @@ Internet-Draft XR Fragments September 2023
| | | | fileformats & scenes |
+-------+--------+----------------+----------------------------+
| class | string | "class": | available through custom |
| | | "cubes" | property in 3D fileformats |
+-------+--------+----------------+----------------------------+
| href | string | "href": | available through custom |
@ -264,24 +273,21 @@ Internet-Draft XR Fragments September 2023
user to interact with the buttonA and buttonB.
In case of buttonA the end-user will be teleported to another
location and time in the *current loaded scene*, but buttonB will
*replace the current scene* with a new one, like other.fbx.
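A rough sketch of how such metadata could look as custom properties on
the two buttons (the object names come from the example scene above,
the fragment values are only illustrative):

  "buttonA": { "href": "#pos=1,1,0&t=100,200" }   <- teleport/time-jump within the current scene
  "buttonB": { "href": "other.fbx" }              <- replace the current scene with other.fbx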
7. Embedding 3D content
Here's an ascii representation of a 3D scene-graph with 3D objects
&#9723; which embeds remote & local 3D objects &#9723; (without)
using queries:
+--------------------------------------------------------+ +-------------------------+
| | | |
| index.gltf | | ocean.com/aquarium.fbx |
@ -306,41 +312,42 @@ Internet-Draft XR Fragments September 2023
Also, after lazy-loading ocean.com/aquarium.gltf, only the queried
objects bass and tuna will be instanced inside aquariumcube.
Resizing will happen according to its placeholder object
aquariumcube, see chapter Scaling.
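As a sketch, the placeholder object could carry such an embed in a src
custom property along these lines (the exact value is illustrative,
the names are taken from the diagram above):

  "aquariumcube": { "src": "https://ocean.com/aquarium.fbx#q=bass tuna" }

Only the queried objects (bass and tuna) end up instanced inside the
placeholder after lazy-loading.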
8. Text in XR (tagging,linking to spatial objects)
We still think and speak in simple text, not in HTML or RDF.
The most advanced human will probably not shout <h1>FIRE!</h1> in
case of emergency.
Given the new dawn of (non-keyboard) XR interfaces, keeping text as
is (not obscuring with markup) is preferred.
Ideally metadata must come *later with* text, but not *obfuscate* the
text, or *in another* file.
| Humans first, machines (AI) later (core principle (#core-
| principle))
This way:
van Kammen Expires 8 March 2024 [Page 6]
Internet-Draft XR Fragments September 2023
1. XR Fragments allows <b id="tagging-text">hasslefree XR text
tagging</b>, using BibTeX metadata *at the end of content* (like
visual-meta (https://visual.meta.info)).
2. XR Fragments allows hasslefree <a href="#textual-tag">textual
tagging</a>, <a href="#spatial-tag">spatial tagging</a>, and <a
href="#supra-tagging">supra tagging</a>, by mapping 3D/text
object (class)names using BibTeX 'tags'
3. inline BibTeX 'tags' are the minimum required *requestless
metadata*-layer for XR text, RDF/JSON is great (but fits better
in the application-layer)
4. Default font (unless specified otherwise) is a modern monospace
font, for maximized tabular expressiveness (see the core
principle (#core-principle)).
@ -352,48 +359,109 @@ Internet-Draft XR Fragments September 2023
metadata like RDF (see the core principle (#core-principle))
This allows recursive connections between text itself, as well as 3D
objects and vice versa, using *BibTeX-tags* :
+--------------------------------------------------+
| My Notes |
| |
| The houses seen here are built in baroque style. |
| |
| @house{houses, <----- XR Fragment triple/tag: phrase-matching BibTeX
| url = {#.house} <------------------- XR Fragment URI
| } |
+--------------------------------------------------+
This allows instant realtime tagging of objects at various scopes:
+====================================+=============================+
| scope | matching algo |
+====================================+=============================+
| <b id="textual- | text containing 'houses' is |
| tagging">textual</b> | now automatically tagged |
| | with 'house' (incl. |
| | plaintext src child nodes) |
+------------------------------------+-----------------------------+
| <b id="spatial- | spatial object(s) with |
| tagging">spatial</b> | "class":"house" (because of |
| | {#.house}) are now |
| | automatically tagged with |
| | 'house' (incl. child nodes) |
+------------------------------------+-----------------------------+
| <b id="supra-tagging">supra</b> | text- or spatial-object(s) |
| | (non-descendant nodes) |
| | elsewhere, named 'house', |
| | are automatically tagged |
| | with 'house' (current node |
| | to root node) |
+------------------------------------+-----------------------------+
| <b id="omni-tagging">omni</b> | text- or spatial-object(s) |
| | (non-descendant nodes) |
| | elsewhere, containing |
| | class/name 'house', are |
| | automatically tagged with |
| | 'house' (too node to all |
| | nodes) |
+------------------------------------+-----------------------------+
| <b id="infinite- | text- or spatial-object(s) |
| tagging">infinite</b> | (non-descendant nodes) |
| | elsewhere, containing |
| | class/name 'house' or |
| | 'houses', are automatically |
| | tagged with 'house' (too |
| | node to all nodes) |
+------------------------------------+-----------------------------+
Table 4
This empowers the enduser's spatial expressiveness (see the core
principle (#core-principle)): spatial wires can be rendered, words
can be highlighted, spatial objects can be highlighted/moved/scaled,
links can be manipulated by the user.
The simplicity of appending BibTeX 'tags' (humans first, machines
later) is also demonstrated by visual-meta (https://visual-meta.info)
in greater detail.
1. The XR Browser needs to offer a global setting/control to adjust
tag-scope with at least range: [text, spatial, text+spatial,
supra, omni, infinite]
2. The XR Browser should always allow the human to view/edit the
BibTeX metadata manually, by clicking 'toggle metadata' on the
'back' (contextmenu e.g.) of any XR text, anywhere anytime.
| NOTE: infinite matches both 'house' and 'houses' in text, as well
| as spatial objects with "class":"house" or name "house". This
| multiplexing of id/category is deliberate because of the core
| principle (#core-principle).
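Below is a naive, non-normative javascript sketch of such scope-aware
matching; the tag object and the node properties (name, text, and a
"class" custom property read from userData, three.js-style) are
assumptions for illustration only:

// tag = { type:'house', key:'houses', url:'#.house' }  // from  @house{houses, url = {#.house}}
function tagMatches(tag, node, scope){
  let cls = (node.userData && node.userData.class) || ''   // custom property "class"
  let txt = node.text || ''                                // plaintext content (if any)
  if( scope == 'textual'  ) return txt.includes(tag.key)             // text contains 'houses'
  if( scope == 'spatial'  ) return tag.url == '#.' + cls             // "class":"house" matches {#.house}
  if( scope == 'supra'    ) return node.name == tag.type             // object named 'house' elsewhere
  if( scope == 'omni'     ) return node.name == tag.type || cls == tag.type
  if( scope == 'infinite' ) return tagMatches(tag,node,'omni') ||
                                   txt.includes(tag.type) || txt.includes(tag.key)
  return false
}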
8.1. Default Data URI mimetype
The src-values work as expected (respecting mime-types), however:
@ -427,6 +495,17 @@ Internet-Draft XR Fragments September 2023
* less network requests, therefore less webservices, therefore less
servers, and overall better FPS in XR
| This significantly expands expressiveness and portability of human
| text, by *postponing machine-concerns to the end of the human
| text* in contrast to literal interweaving of content and
@ -434,22 +513,13 @@ Internet-Draft XR Fragments September 2023
For all other purposes, regular mimetypes can be used (but are not
required by the spec).
To keep XR Fragments a lightweight spec, BibTeX is used for text-
spatial object mappings (not a scripting language or RDF e.g.).
| Applications are also free to attach any JSON(LD / RDF) to spatial
| objects using custom properties (but is not interpreted by this
| spec).
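As a sketch, a 3D object could carry human text plus its BibTeX
appendix directly in a plain-text data URI src value (assuming the
default plain-text mimetype; spaces shown unencoded for readability,
the content is illustrative):

  "src": "data:text/plain;charset=utf-8,The houses here are baroque. @house{houses, url = {#.house} }"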
8.2. URL and Data URI
+--------------------------------------------------------------+ +------------------------+
@ -477,6 +547,21 @@ Internet-Draft XR Fragments September 2023
Example:
+------------------------------------------------------------------------------------+
| |
| index.gltf |
@ -498,101 +583,113 @@ Internet-Draft XR Fragments September 2023
1. When the user surfs to https://.../index.gltf#AI the XR
Fragments-parser points the enduser to the AI object, and can
show contextual info about it.
2. When (partial) remote content is embedded thru XR Fragment
queries (see XR Fragment queries), its related visual-meta can be
embedded along.
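A non-normative sketch of point 1 above, assuming a three.js-style
scene object and a hypothetical showMetadata() helper:

window.addEventListener('hashchange', () => {
  let frag = decodeURIComponent( document.location.hash.slice(1) )  // "AI" e.g.
  let obj  = scene.getObjectByName(frag)                            // look up the named 3D object
  if( obj ){
    camera.lookAt( obj.position )    // point the enduser to the object
    showMetadata( obj )              // show contextual info (visual-meta e.g.)
  }
})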
8.3. BibTeX as lowest common denominator for tagging/triples
| "When a car breaks down, the ones *without* turbosupercharger are
| easier to fix"
Unlike XML or JSON, the typeless, unnested, everything-is-text nature
of BibTeX tags is a great advantage for introspection.
In a way, the RDF project should welcome it as a missing sensemaking
precursor to (eventual) extrospective RDF.
BibTeX-appendices are already used in the digital AND physical world
(academic books, visual-meta (https://visual-meta.info)), perhaps due
to its terseness & simplicity.
In that sense, it's one step up from the .ini fileformat (which has
never leaked into the physical book-world):
1. <b id="frictionless-copy-paste">frictionless copy/pasting</b> (by
humans) of (unobtrusive) content AND metadata
2. an introspective 'sketchpad' for metadata, which can (optionally)
mature into RDF later
+====================+=================+=================+
| characteristic | UTF8 Plain Text | RDF |
| | (with BibTeX) | |
+====================+=================+=================+
| perspective | introspective | extrospective |
+--------------------+-----------------+-----------------+
| structure | fuzzy | precise |
| | (sensemaking) | |
+--------------------+-----------------+-----------------+
| space/scope | local | world |
+--------------------+-----------------+-----------------+
| everything is text | yes | no |
| (string) | | |
+--------------------+-----------------+-----------------+
| leaves (dictated) | yes | no |
| text intact | | |
+--------------------+-----------------+-----------------+
| markup language | just an | ~4 different |
| | appendix | |
+--------------------+-----------------+-----------------+
| polyglot format | no | yes |
+--------------------+-----------------+-----------------+
| easy to copy/paste | yes | up to |
| content+metadata | | application |
+--------------------+-----------------+-----------------+
| easy to write/ | yes | depends |
| repair for layman | | |
+--------------------+-----------------+-----------------+
| easy to | yes (fits on A4 | depends |
| (de)serialize | paper) | |
+--------------------+-----------------+-----------------+
| infrastructure | selfcontained | (semi)networked |
| | (plain text) | |
+--------------------+-----------------+-----------------+
| freeform tagging/ | yes, terse | yes, verbose |
| annotation | | |
+--------------------+-----------------+-----------------+
| can be appended to | yes | up to |
| text-content | | application |
+--------------------+-----------------+-----------------+
| copy-paste text | yes | up to |
| preserves metadata | | application |
+--------------------+-----------------+-----------------+
| emoji | yes | depends on |
| | | encoding |
+--------------------+-----------------+-----------------+
| predicates | free | semi pre- |
| | | determined |
+--------------------+-----------------+-----------------+
| implementation/ | no | depends |
| network overhead | | |
+--------------------+-----------------+-----------------+
| used in (physical) | yes (visual- | no |
| books/PDF | meta) | |
+--------------------+-----------------+-----------------+
| terse non-verb | yes | no |
| predicates | | |
+--------------------+-----------------+-----------------+
| nested structures | no | yes |
+--------------------+-----------------+-----------------+
Table 5
8.4. XR Text (w. BibTeX) example parser
Here's a naive XR Text (de)multiplexer in javascript (which also
supports visual-meta start/end-blocks):
@ -610,14 +707,6 @@ xrtext = {
data=''
}else data += `${line}\n`
}
text += data ? '' : `${line}\n`
last=line
})
@ -630,9 +719,17 @@ Internet-Draft XR Fragments September 2023
.map( s => s.trim() ).join("\n") // be nice
.replace( /}@/, "}\n@" ) // to authors
.replace( /},}/, "},\n}" ) // which struggle
.replace( /^}/, "\n}" ) // with writing single-line BibTeX
.split( /\n/ ) //
.filter( c => c.trim() ) // actual processing:
.map( (s) => {
if( s.match(/(^}|-end})/) && st.length > 1 ) st.shift()
else if( s.match(/^@/) ) st.unshift( st[0][ s.replace(/(-start|,)/g,'') ] = {} )
@ -665,15 +762,6 @@ var {meta,text} = xrtext.decode.text(str) // demultiplex text & bibtex
meta['@foo{'] = { "note":"note from the user"} // edit metadata
xrtext.encode(text,meta) // multiplex text & bibtex back together
| above can be used as a startingpoint for LLMs to translate/
| steelman to any language.
@ -685,9 +773,18 @@ Internet-Draft XR Fragments September 2023
an XR Fragment-compatible browser can copy/paste/share data in these
ways:
1. time/space: 3D object (current animation-loop)
2. text: TeXt object (including BibTeX/visual-meta if any)
3. interlinked: Collected objects by visual-meta tag
10. XR Fragment queries
@ -703,48 +800,64 @@ Internet-Draft XR Fragments September 2023
It's a simple but powerful syntax which allows <b>css</b>-like class/
id-selectors with a searchengine prompt-style feeling:
1. queries are showing/hiding objects *only* when defined as src
value (prevents sharing of scene-tampered URL's).
2. queries are highlighting objects when defined in the top-Level
(browser) URL (bar).
3. search words like cube and foo in #q=cube foo are matched against
3D object names or custom metadata-key(values)
4. search words like cube and foo in #q=cube foo are matched against
tags (BibTeX) inside plaintext src values like @cube{redcube, ...
e.g.
5. # equals #q=*
6. words starting with . like .german match class-metadata of 3D
objects like "class":"german"
7. words starting with . like .german match class-metadata of
(BibTeX) tags in XR Text objects like @german{KarlHeinz, ... e.g.
| *For example*: #q=.foo is a shorthand for #q=class:foo, which will
| select objects with custom property class:foo. Just a simple
| #q=cube will simply select an object named cube.
* see an example video here
(https://coderofsalvation.github.io/xrfragment.media/queries.mp4)
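A naive, non-normative javascript sketch of the name/class matching
rules above (the node properties are assumed three.js-style; the real
parser is linked in the Query Parser chapter):

function queryMatches(word, node){
  let props = node.userData || {}                            // custom properties
  if( word[0] == '.' ) return props.class == word.slice(1)   // .german -> "class":"german"
  if( word.includes(':') ){                                  // key:value -> custom property
    let [k,v] = word.split(':')
    return String( props[k] ) == v
  }
  return node.name == word                                   // cube -> object named 'cube'
}
// excluders ('-') and the root-selector ('/') are covered in 'including/excluding' below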
10.1. including/excluding
+==========+=================================================+
| operator | info |
+==========+=================================================+
| * | select all objects (only useful in src custom |
| | property) |
+----------+-------------------------------------------------+
| - | removes/hides object(s) |
+----------+-------------------------------------------------+
| : | indicates an object-embedded custom property |
| | key/value |
+----------+-------------------------------------------------+
| . | alias for "class" :".foo" equals class:foo |
+----------+-------------------------------------------------+
| > < | compare float or int number |
+----------+-------------------------------------------------+
| / | reference to root-scene. |
| | Useful in case of (preventing) showing/hiding |
| | objects in nested scenes (instanced by src) (*) |
+----------+-------------------------------------------------+
Table 6
| * = #q=-/cube hides object cube only in the root-scene (not nested
| cube objects)
| #q=-cube hides both object cube in the root-scene <b>AND</b>
| nested skybox objects
&#187; example implementation
(https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/
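As a sketch of how these operators are typically combined inside a src
custom property (the filenames are hypothetical):

  "src": "painting.gltf#q=*"       <- embed painting.gltf with all of its objects
  "src": "city.gltf#q=-skybox"     <- embed city.gltf, but hide its 'skybox' object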
@ -776,16 +889,18 @@ Internet-Draft XR Fragments September 2023
13. and we set root to true or false (true=/ root selector is
present)
14. we convert key '/foo' into 'foo'
15. finally we add the key/value to the store like store.foo =
{id:false,root:true} e.g.
| An example query-parser (which compiles to many languages) can be
| found here
| (https://github.com/coderofsalvation/xrfragment/blob/main/src/
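A non-normative javascript sketch of the last few parsing steps above,
for a single query word such as '-/foo':

function addToStore(store, word){
  let id   = word[0] != '-'               // false = excluder '-' present
  let key  = id ? word : word.slice(1)    // strip the '-'
  let root = key[0] == '/'                // true = '/' root selector present
  if( root ) key = key.slice(1)           // convert '/foo' into 'foo'
  store[key] = { id: id, root: root }     // store.foo = {id:false,root:true} e.g.
  return store
}
// addToStore({}, '-/foo')  =>  { foo: { id:false, root:true } }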
@ -807,7 +922,7 @@ Internet-Draft XR Fragments September 2023
| pos=1,2,3&rot=0,90,0&q=.foo | combinators |
+-----------------------------+---------------------------------+
Table 7
11. Security Considerations
@ -834,7 +949,4 @@ Internet-Draft XR Fragments September 2023

View file

@ -14,8 +14,9 @@
The specification promotes spatial addressibility, sharing, navigation, query-ing and tagging interactive (text)objects across for (XR) Browsers.<br />
XR Fragments allows us to enrich existing dataformats, by recursive use of existing proven technologies like <eref target="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</eref> and BibTeX notation.<br />
</t>
<t>Almost every idea in this document is demonstrated at <eref target="https://xrfragment.org">https://xrfragment.org</eref></t>
</abstract>
</front>
@ -27,18 +28,28 @@ XR Fragments allows us to enrich existing dataformats, by recursive use of exist
Historically, there's many attempts to create the ultimate markuplanguage or 3D fileformat.<br />
However, thru the lens of authoring, their lowest common denominator is still: plain text.<br />
XR Fragments allows us to enrich/connect existing dataformats, by recursive use of existing technologies:<br />
</t>
<ol spacing="compact">
<li>addressibility and navigation of 3D scenes/objects: <eref target="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</eref> + src/href spatial metadata</li>
<li>hasslefree tagging across text and spatial objects using <eref target="https://en.wikipedia.org/wiki/BibTeX">BibTeX</eref> 'tags' as appendix (see <eref target="https://visual-meta.info">visual-meta</eref> e.g.)</li>
</ol>
<blockquote><t>NOTE: The chapters in this document are ordered from highlevel to lowlevel (technical) as much as possible</t>
</blockquote></section>
<section anchor="core-principle"><name>Core principle</name>
<t>XR Fragments strives to serve (nontechnical/fuzzy) humans first, and machine(implementations) later, by ensuring hasslefree text-vs-thought feedback loops.<br />
This also means that the repair-ability of machine-matters should be human friendly too (not too complex).<br />
</t>
<blockquote><t>&quot;When a car breaks down, the ones <strong>without</strong> turbosupercharger are easier to fix&quot;</t>
</blockquote><t>Let's always focus on average humans: the 'fuzzy symbolical mind' must be served first, before serving the greater <eref target="https://en.wikipedia.org/wiki/Borg">'categorized typesafe RDF hive mind'</eref>.</t>
<blockquote><t>Humans first, machines (AI) later.</t>
</blockquote></section>
<section anchor="conventions-and-definitions"><name>Conventions and Definitions</name>
<table>
<thead>
@ -71,7 +82,7 @@ XR Fragments allows us to enrich existing dataformats, by recursive use of exist
<tr>
<td>XR fragment</td>
<td>URI Fragment with spatial hints like <tt>#pos=0,0,0&amp;t=1,100</tt> e.g.</td>
</tr>
<tr>
@ -86,17 +97,17 @@ XR Fragments allows us to enrich existing dataformats, by recursive use of exist
<tr>
<td>query</td>
<td>an URI Fragment-operator which queries object(s) from a scene like <tt>#q=cube</tt></td>
</tr>
<tr>
<td>visual-meta</td>
<td><eref target="https://visual.meta.info">visual-meta</eref> data appended to text which is indirectly visible/editable in XR.</td>
<td><eref target="https://visual.meta.info">visual-meta</eref> data appended to text/books/papers which is indirectly visible/editable in XR.</td>
</tr>
<tr>
<td>requestless metadata</td>
<td>opposite of networked metadata (RDF/HTML requests can easily fan out into framerate-dropping, hence not used a lot in games).</td>
</tr>
<tr>
@ -121,14 +132,6 @@ XR Fragments allows us to enrich existing dataformats, by recursive use of exist
</tbody>
</table></section>
<section anchor="core-principle"><name>Core principle</name>
<t>XR Fragments strives to serve humans first, machine(implementations) later, by ensuring hasslefree text-to-thought feedback loops.<br />
This also means that the repair-ability of machine-matters should be human friendly too (not too complex).<br />
</t>
<blockquote><t>&quot;When a car breaks down, the ones without turbosupercharger are easier to fix&quot;</t>
</blockquote></section>
<section anchor="list-of-uri-fragments"><name>List of URI Fragments</name>
<table>
<thead>
@ -235,11 +238,11 @@ This also means that the repair-ability of machine-matters should be human frien
<t>An XR Fragment-compatible browser viewing this scene, allows the end-user to interact with the <tt>buttonA</tt> and <tt>buttonB</tt>.<br />
In case of <tt>buttonA</tt> the end-user will be teleported to another location and time in the <strong>current loaded scene</strong>, but <tt>buttonB</tt> will
<strong>replace the current scene</strong> with a new one, like <tt>other.fbx</tt>.</t>
</section>
<section anchor="embedding-3d-content"><name>Embedding 3D content</name>
<t>Here's an ascii representation of a 3D scene-graph with 3D objects <tt></tt> which embeds remote &amp; local 3D objects <tt></tt> (without) using queries:</t>
<artwork> +--------------------------------------------------------+ +-------------------------+
| | | |
@ -263,51 +266,86 @@ In case of <tt>buttonA</tt> the end-user will be teleported to another location
Also, after lazy-loading <tt>ocean.com/aquarium.gltf</tt>, only the queried objects <tt>bass</tt> and <tt>tuna</tt> will be instanced inside <tt>aquariumcube</tt>.<br />
Resizing will happen according to its placeholder object <tt>aquariumcube</tt>, see chapter Scaling.<br />
</t>
</section>
<section anchor="text-in-xr-tagging-linking-to-spatial-objects"><name>Text in XR (tagging,linking to spatial objects)</name>
<t>We still think and speak in simple text, not in HTML or RDF.<br />
The most advanced human will probably not shout <tt>&lt;h1&gt;FIRE!&lt;/h1&gt;</tt> in case of emergency.<br />
Given the new dawn of (non-keyboard) XR interfaces, keeping text as is (not obscuring with markup) is preferred.<br />
Ideally metadata must come <strong>later with</strong> text, but not <strong>obfuscate</strong> the text, or <strong>in another</strong> file.<br />
</t>
<blockquote><t>Humans first, machines (AI) later (<eref target="#core-principle">core principle</eref>).</t>
</blockquote><t>This way:</t>
<ol spacing="compact">
<li>XR Fragments allows &lt;b id=&quot;tagging-text&quot;&gt;hasslefree XR text tagging&lt;/b&gt;, using BibTeX metadata <strong>at the end of content</strong> (like <eref target="https://visual.meta.info">visual-meta</eref>).</li>
<li>XR Fragments allows hasslefree &lt;a href=&quot;#textual-tag&quot;&gt;textual tagging&lt;/a&gt;, &lt;a href=&quot;#spatial-tag&quot;&gt;spatial tagging&lt;/a&gt;, and &lt;a href=&quot;#supra-tagging&quot;&gt;supra tagging&lt;/a&gt;, by mapping 3D/text object (class)names using BibTeX 'tags'</li>
<li>inline BibTeX 'tags' are the minimum required <strong>requestless metadata</strong>-layer for XR text, RDF/JSON is great (but fits better in the application-layer)</li>
<li>Default font (unless specified otherwise) is a modern monospace font, for maximized tabular expressiveness (see <eref target="#core-principle">the core principle</eref>).</li>
<li>anti-pattern: hardcoupling a mandatory <strong>obtrusive markuplanguage</strong> or framework with an XR browsers (HTML/VRML/Javascript) (see <eref target="#core-principle">the core principle</eref>)</li>
<li>anti-pattern: limiting human introspection, by immediately funneling human thought into typesafe, precise, pre-categorized metadata like RDF (see <eref target="#core-principle">the core principle</eref>)</li>
</ol>
<t>This allows recursive connections between text itself, as well as 3D objects and vice versa, using <strong>BibTeX-tags</strong> :</t>
<artwork> +--------------------------------------------------+
| My Notes |
| |
| The houses seen here are built in baroque style. |
| |
| @house{houses, &lt;----- XR Fragment triple/tag: phrase-matching BibTeX
| url = {#.house} &lt;------------------- XR Fragment URI
| } |
+--------------------------------------------------+
</artwork>
<t>This allows instant realtime tagging of objects at various scopes:</t>
<table>
<thead>
<tr>
<th>scope</th>
<th>matching algo</th>
</tr>
</thead>
<tbody>
<tr>
<td>&lt;b id=&quot;textual-tagging&quot;&gt;textual&lt;/b&gt;</td>
<td>text containing 'houses' is now automatically tagged with 'house' (incl. plaintext <tt>src</tt> child nodes)</td>
</tr>
<tr>
<td>&lt;b id=&quot;spatial-tagging&quot;&gt;spatial&lt;/b&gt;</td>
<td>spatial object(s) with <tt>&quot;class&quot;:&quot;house&quot;</tt> (because of <tt>{#.house}</tt>) are now automatically tagged with 'house' (incl. child nodes)</td>
</tr>
<tr>
<td>&lt;b id=&quot;supra-tagging&quot;&gt;supra&lt;/b&gt;</td>
<td>text- or spatial-object(s) (non-descendant nodes) elsewhere, named 'house', are automatically tagged with 'house' (current node to root node)</td>
</tr>
<tr>
<td>&lt;b id=&quot;omni-tagging&quot;&gt;omni&lt;/b&gt;</td>
<td>text- or spatial-object(s) (non-descendant nodes) elsewhere, containing class/name 'house', are automatically tagged with 'house' (too node to all nodes)</td>
</tr>
<tr>
<td>&lt;b id=&quot;infinite-tagging&quot;&gt;infinite&lt;/b&gt;</td>
<td>text- or spatial-object(s) (non-descendant nodes) elsewhere, containing class/name 'house' or 'houses', are automatically tagged with 'house' (too node to all nodes)</td>
</tr>
</tbody>
</table><t>This empowers the enduser's spatial expressiveness (see <eref target="#core-principle">the core principle</eref>): spatial wires can be rendered, words can be highlighted, spatial objects can be highlighted/moved/scaled, links can be manipulated by the user.<br />
The simplicity of appending BibTeX 'tags' (humans first, machines later) is also demonstrated by <eref target="https://visual-meta.info">visual-meta</eref> in greater detail.</t>
<ol spacing="compact">
<li>The XR Browser needs to offer a global setting/control to adjust tag-scope with at least range: <tt>[text, spatial, text+spatial, supra, omni, infinite]</tt></li>
<li>The XR Browser should always allow the human to view/edit the BibTeX metadata manually, by clicking 'toggle metadata' on the 'back' (contextmenu e.g.) of any XR text, anywhere anytime.</li>
</ol>
<blockquote><t>NOTE: infinite matches both 'house' and 'houses' in text, as well as spatial objects with <tt>&quot;class&quot;:&quot;house&quot;</tt> or name &quot;house&quot;. This multiplexing of id/category is deliberate because of <eref target="#core-principle">the core principle</eref>.</t>
</blockquote>
<section anchor="default-data-uri-mimetype"><name>Default Data URI mimetype</name>
<t>The <tt>src</tt>-values work as expected (respecting mime-types), however:</t>
@ -333,7 +371,7 @@ Its implications are that local/remote responses can now:</t>
<blockquote><t>This significantly expands expressiveness and portability of human text, by <strong>postponing machine-concerns to the end of the human text</strong> in contrast to literal interweaving of content and markupsymbols (or extra network requests, webservices e.g.).</t>
</blockquote><t>For all other purposes, regular mimetypes can be used (but are not required by the spec).<br />
To keep XR Fragments a lightweight spec, BibTeX is used for text-spatial object mappings (not a scripting language or RDF e.g.).</t>
<blockquote><t>Applications are also free to attach any JSON(LD / RDF) to spatial objects using custom properties (but is not interpreted by this spec).</t>
</blockquote></section>
@ -382,9 +420,15 @@ This allows rich interaction and interlinking between text and 3D objects:</t>
</ol>
</section>
<section anchor="bibtex-as-lowest-common-denominator-for-tagging-triple"><name>BibTeX as lowest common denominator for tagging/triple</name>
<t>The everything-is-text focus of BiBTex is a great advantage for introspection, and perhaps a necessary bridge towards RDF (extrospective).
BibTeX-appendices (visual-meta e.g.) are already adopted in the physical world (academic books), perhaps due to its terseness &amp; simplicity:</t>
<section anchor="bibtex-as-lowest-common-denominator-for-tagging-triples"><name>BibTeX as lowest common denominator for tagging/triples</name>
<blockquote><t>&quot;When a car breaks down, the ones <strong>without</strong> turbosupercharger are easier to fix&quot;</t>
</blockquote><t>Unlike XML or JSON, the typeless, unnested, everything-is-text nature of BibTeX tags is a great advantage for introspection.<br />
In a way, the RDF project should welcome it as a missing sensemaking precursor to (eventual) extrospective RDF.<br />
BibTeX-appendices are already used in the digital AND physical world (academic books, <eref target="https://visual-meta.info">visual-meta</eref>), perhaps due to its terseness &amp; simplicity.<br />
In that sense, it's one step up from the <tt>.ini</tt> fileformat (which has never leaked into the physical book-world):</t>
<ol spacing="compact">
<li>&lt;b id=&quot;frictionless-copy-paste&quot;&gt;frictionless copy/pasting&lt;/b&gt; (by humans) of (unobtrusive) content AND metadata</li>
@ -394,7 +438,7 @@ BibTeX-appendices (visual-meta e.g.) are already adopted in the physical world (
<thead>
<tr>
<th>characteristic</th>
<th>UTF8 Plain Text (with BibTeX)</th>
<th>RDF</th>
</tr>
</thead>
@ -406,6 +450,12 @@ BibTeX-appendices (visual-meta e.g.) are already adopted in the physical world (
<td>extrospective</td>
</tr>
<tr>
<td>structure</td>
<td>fuzzy (sensemaking)</td>
<td>precise</td>
</tr>
<tr>
<td>space/scope</td>
<td>local</td>
@ -425,8 +475,8 @@ BibTeX-appendices (visual-meta e.g.) are already adopted in the physical world (
</tr>
<tr>
<td>markup language</td>
<td>just an appendix</td>
<td>~4 different</td>
</tr>
@ -439,61 +489,55 @@ BibTeX-appendices (visual-meta e.g.) are already adopted in the physical world (
<tr>
<td>easy to copy/paste content+metadata</td>
<td>yes</td>
<td>up to application</td>
</tr>
<tr>
<td>easy to write/repair for layman</td>
<td>yes</td>
<td>depends</td>
</tr>
<tr>
<td>easy to (de)serialize</td>
<td>yes (fits on A4 paper)</td>
<td>depends</td>
</tr>
<tr>
<td>infrastructure</td>
<td>selfcontained (plain text)</td>
<td>(semi)networked</td>
</tr>
<tr>
<td>freeform tagging/annotation</td>
<td>yes, terse</td>
<td>yes, verbose</td>
</tr>
<tr>
<td>can be appended to text-content</td>
<td>yes</td>
<td>up to application</td>
</tr>
<tr>
<td>copy-paste text preserves metadata</td>
<td>yes</td>
<td>up to application</td>
</tr>
<tr>
<td>emoji</td>
<td>yes</td>
<td>depends on encoding</td>
</tr>
<tr>
<td>predicates</td>
<td>free</td>
<td>semi pre-determined</td>
</tr>
<tr>
@ -509,7 +553,7 @@ BibTeX-appendices (visual-meta e.g.) are already adopted in the physical world (
</tr>
<tr>
<td>terse non-verb predicates</td>
<td>yes</td>
<td>no</td>
</tr>
@ -520,10 +564,9 @@ BibTeX-appendices (visual-meta e.g.) are already adopted in the physical world (
<td>yes</td>
</tr>
</tbody>
</table></section>
<section anchor="xr-text-bibtex-example-parser"><name>XR text (BibTeX) example parser</name>
<section anchor="xr-text-w-bibtex-example-parser"><name>XR Text (w. BibTeX) example parser</name>
<t>Here's a naive XR Text (de)multiplexer in javascript (which also supports visual-meta start/end-blocks):</t>
<artwork>xrtext = {
@ -551,7 +594,7 @@ BibTeX-appendices (visual-meta e.g.) are already adopted in the physical world (
.map( s =&gt; s.trim() ).join(&quot;\n&quot;) // be nice
.replace( /}@/, &quot;}\n@&quot; ) // to authors
.replace( /},}/, &quot;},\n}&quot; ) // which struggle
.replace( /^}/, &quot;\n}&quot; ) // with writing single-line BibTeX
.split( /\n/ ) //
.filter( c =&gt; c.trim() ) // actual processing:
.map( (s) =&gt; {
@ -595,11 +638,11 @@ xrtext.encode(text,meta) // multiplex text &amp; bibte
XR Fragment allows HYPER-copy/paste: time, space and text interlinked.
Therefore, the enduser in an XR Fragment-compatible browser can copy/paste/share data in these ways:</t>
<ol spacing="compact">
<li>time/space: 3D object (current animation-loop)</li>
<li>text: TeXt object (including BibTeX/visual-meta if any)</li>
<li>interlinked: Collected objects by visual-meta tag</li>
</ol>
</section>
<section anchor="xr-fragment-queries"><name>XR Fragment queries</name>
@ -616,29 +659,64 @@ Therefore, the enduser in an XR Fragment-compatible browser can copy/paste/share
<t>It's a simple but powerful syntax which allows &lt;b&gt;css&lt;/b&gt;-like class/id-selectors with a searchengine prompt-style feeling:</t>
<ol spacing="compact">
<li>queries are showing/hiding objects <strong>only</strong> when defined as <tt>src</tt> value (prevents sharing of scene-tampered URL's).</li>
<li>queries are highlighting objects when defined in the top-Level (browser) URL (bar).</li>
<li>search words like <tt>cube</tt> and <tt>foo</tt> in <tt>#q=cube foo</tt> are matched against 3D object names or custom metadata-key(values)</li>
<li>search words like <tt>cube</tt> and <tt>foo</tt> in <tt>#q=cube foo</tt> are matched against tags (BibTeX) inside plaintext <tt>src</tt> values like <tt>@cube{redcube, ...</tt> e.g.</li>
<li><tt>#</tt> equals <tt>#q=*</tt></li>
<li>words starting with <tt>.</tt> like <tt>.german</tt> match class-metadata of 3D objects like <tt>&quot;class&quot;:&quot;german&quot;</tt></li>
<li>words starting with <tt>.</tt> like <tt>.german</tt> match class-metadata of (BibTeX) tags in XR Text objects like <tt>@german{KarlHeinz, ...</tt> e.g.</li>
</ol>
<blockquote><t><strong>For example</strong>: <tt>#q=.foo</tt> is a shorthand for <tt>#q=class:foo</tt>, which will select objects with custom property <tt>class</tt>:<tt>foo</tt>. Just a simple <tt>#q=cube</tt> will simply select an object named <tt>cube</tt>.</t>
</blockquote>
<ul spacing="compact">
<li>see <eref target="https://coderofsalvation.github.io/xrfragment.media/queries.mp4">an example video here</eref></li>
</ul>
<section anchor="including-excluding"><name>including/excluding</name>
<table>
<thead>
<tr>
<th>operator</th>
<th>info</th>
</tr>
</thead>
<tbody>
<tr>
<td><tt>*</tt></td>
<td>select all objects (only useful in <tt>src</tt> custom property)</td>
</tr>
<tr>
<td><tt>-</tt></td>
<td>removes/hides object(s)</td>
</tr>
<tr>
<td><tt>:</tt></td>
<td>indicates an object-embedded custom property key/value</td>
</tr>
<tr>
<td><tt>.</tt></td>
<td>alias for <tt>&quot;class&quot; :&quot;.foo&quot;</tt> equals <tt>class:foo</tt></td>
</tr>
<tr>
<td><tt>&gt;</tt> <tt>&lt;</tt></td>
<td>compare float or int number</td>
</tr>
<tr>
<td><tt>/</tt></td>
<td>reference to root-scene.<br />
Useful in case of (preventing) showing/hiding objects in nested scenes (instanced by <tt>src</tt>) (*)</td>
</tr>
</tbody>
</table><blockquote><t>* = <tt>#q=-/cube</tt> hides object <tt>cube</tt> only in the root-scene (not nested <tt>cube</tt> objects)<br />
<tt>#q=-cube</tt> hides both object <tt>cube</tt> in the root-scene &lt;b&gt;AND&lt;/b&gt; nested <tt>skybox</tt> objects</t>
<t><eref target="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/q.js">» example implementation</eref>
</blockquote><t><eref target="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/q.js">» example implementation</eref>
<eref target="https://github.com/coderofsalvation/xrfragment/blob/main/example/assets/query.gltf#L192">» example 3D asset</eref>
<eref target="https://github.com/coderofsalvation/xrfragment/issues/3">» discussion</eref></t>
</section>
@ -661,7 +739,7 @@ Useful in case of (preventing) showing/hiding objects in nested scenes (instance
<li>therefore we set <tt>id</tt> to <tt>true</tt> or <tt>false</tt> (false=excluder <tt>-</tt>)</li>
<li>and we set <tt>root</tt> to <tt>true</tt> or <tt>false</tt> (true=<tt>/</tt> root selector is present)</li>
<li>we convert key '/foo' into 'foo'</li>
<li>finally we add the key/value to the store like <tt>store.foo = {id:false,root:true}</tt> e.g.</li>
</ol>
<blockquote><t>An example query-parser (which compiles to many languages) can be <eref target="https://github.com/coderofsalvation/xrfragment/blob/main/src/xrfragment/Query.hx">found here</eref></t>
</blockquote></section>