<!DOCTYPE html>
<html>
<head>
<title>XR Fragments</title>
<meta name="GENERATOR" content="github.com/mmarkdown/mmark Mmark Markdown Processor - mmark.miek.nl">
<meta charset="utf-8">
</head>
<body>
<!-- for annotated version see: https://raw.githubusercontent.com/ietf-tools/rfcxml-templates-and-schemas/main/draft-rfcxml-general-template-annotated-00.xml -->
<style type="text/css">
body{
font-family: monospace;
max-width: 1000px;
font-size: 15px;
padding: 0% 20%;
line-height: 30px;
color:#555;
background:#F0F0F3
}
h1 { margin-top:40px; }
pre{ line-height:18px; }
a,a:visited,a:active{ color: #70f; }
code{
border: 1px solid #AAA;
border-radius: 3px;
padding: 0px 5px 2px 5px;
}
pre{
line-height: 18px;
overflow: auto;
padding: 12px;
}
pre + code {
background:#DDD;
}
pre>code{
border:none;
border-radius:0px;
padding:0;
}
blockquote{
padding-left: 30px;
margin: 0;
border-left: 5px solid #CCC;
}
th {
border-bottom: 1px solid #000;
text-align: left;
padding-right:45px;
padding-left:7px;
background: #DDD;
}
td {
border-bottom: 1px solid #CCC;
font-size:13px;
}
</style>
<br>
<h1>XR Fragments</h1>
<br>
<pre>
stream: IETF
area: Internet
status: informational
author: Leon van Kammen
date: 2023-04-12T00:00:00Z
workgroup: Internet Engineering Task Force
value: draft-XRFRAGMENTS-leonvankammen-00
</pre>
<h1 class="special" id="abstract">Abstract</h1>
<p>This draft offers a specification for 4D URLs &amp; navigation, to link 3D scenes and text together with or without a network connection.<br>
The specification promotes spatial addressability, sharing, navigation, querying and tagging of interactive (text) objects across (XR) browsers.<br>
XR Fragments allows us to enrich existing dataformats, by recursive use of existing proven technologies like <a href="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</a> and BibTag notation.<br></p>
<blockquote>
<p>Almost every idea in this document is demonstrated at <a href="https://xrfragment.org">https://xrfragment.org</a></p>
</blockquote>
<section data-matter="main">
<h1 id="introduction">Introduction</h1>
<p>How can we add more features to existing text &amp; 3D scenes, without introducing new dataformats?<br>
Historically, there have been many attempts to create the ultimate markup language or 3D fileformat.<br>
However, through the lens of authoring, their lowest common denominator is still: plain text.<br>
XR Fragments allows us to enrich/connect existing dataformats, by recursive use of existing technologies:<br></p>
<ol>
<li>addressability and navigation of 3D scenes/objects: <a href="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</a> + src/href spatial metadata</li>
<li>hasslefree tagging across text and spatial objects using <a href="https://en.wikipedia.org/wiki/BibTeX">BibTags</a> as an appendix (see <a href="https://visual-meta.info">visual-meta</a> e.g.)</li>
</ol>
<blockquote>
<p>NOTE: The chapters in this document are ordered from high-level to low-level (technical) as much as possible</p>
</blockquote>
<h1 id="core-principle">Core principle</h1>
<p>XR Fragments strives to serve (nontechnical/fuzzy) humans first, and machine (implementations) later, by ensuring hasslefree text-vs-thought feedback loops.<br>
This also means that the repairability of machine-matters should be human-friendly too (not too complex).<br></p>
<blockquote>
<p>&ldquo;When a car breaks down, the ones <strong>without</strong> turbosupercharger are easier to fix&rdquo;</p>
</blockquote>
<p>Let&rsquo;s always focus on average humans: the &lsquo;fuzzy symbolical mind&rsquo; must be served first, before serving the greater <a href="https://en.wikipedia.org/wiki/Borg">&lsquo;categorized typesafe RDF hive mind&rsquo;</a>.</p>
<blockquote>
<p>Humans first, machines (AI) later.</p>
</blockquote>
<h1 id="conventions-and-definitions">Conventions and Definitions</h1>
<table>
<thead>
<tr>
<th>definition</th>
<th>explanation</th>
</tr>
</thead>
<tbody>
<tr>
<td>human</td>
<td>a sentient being who thinks fuzzily, absorbs, and shares thought (by plain text, not markup language)</td>
</tr>
<tr>
<td>scene</td>
<td>a (local/remote) 3D scene or 3D file (index.gltf e.g.)</td>
</tr>
<tr>
<td>3D object</td>
<td>an object inside a scene characterized by vertex-, face- and custom-property data.</td>
</tr>
<tr>
<td>metadata</td>
<td>custom properties of text, 3D Scene or Object(nodes), relevant to machines and a human minority (academics/developers)</td>
</tr>
<tr>
<td>XR fragment</td>
<td>URI Fragment with spatial hints like <code>#pos=0,0,0&amp;t=1,100</code> e.g.</td>
</tr>
<tr>
<td>src</td>
<td>(HTML-piggybacked) metadata of a 3D object which instances content</td>
</tr>
<tr>
<td>href</td>
<td>(HTML-piggybacked) metadata of a 3D object which links to content</td>
</tr>
<tr>
<td>query</td>
<td>a URI Fragment operator which queries object(s) from a scene, like <code>#q=cube</code></td>
</tr>
<tr>
<td>visual-meta</td>
<td><a href="https://visual.meta.info">visual-meta</a> data appended to text/books/papers which is indirectly visible/editable in XR.</td>
</tr>
<tr>
<td>requestless metadata</td>
<td>opposite of networked metadata (RDF/HTML requests can easily fan out and drop framerate, hence are not used much in games).</td>
</tr>
<tr>
<td>FPS</td>
<td>frames per second in spatial experiences (games, VR, AR e.g.), should be as high as possible</td>
</tr>
<tr>
<td>introspective</td>
<td>inward sensemaking (&ldquo;I feel this belongs to that&rdquo;)</td>
</tr>
<tr>
<td>extrospective</td>
<td>outward sensemaking (&ldquo;I&rsquo;m fairly sure John is a person who lives in Oklahoma&rdquo;)</td>
</tr>
<tr>
<td><code>◻</code></td>
<td>ascii representation of a 3D object/mesh</td>
</tr>
<tr>
<td>(un)obtrusive</td>
<td>obtrusive: wrapping human text/thought in XML/HTML/JSON obfuscates human text into a salad of machine-symbols and words</td>
</tr>
<tr>
<td>BibTeX</td>
<td>simple tagging/citing/referencing standard for plaintext</td>
</tr>
<tr>
<td>BibTag</td>
<td>a BibTeX tag</td>
</tr>
</tbody>
</table>
<h1 id="list-of-uri-fragments">List of URI Fragments</h1>
<table>
<thead>
<tr>
<th>fragment</th>
<th>type</th>
<th>example</th>
<th>info</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>#pos</code></td>
<td>vector3</td>
<td><code>#pos=0.5,0,0</code></td>
<td>positions camera to xyz-coord 0.5,0,0</td>
</tr>
<tr>
<td><code>#rot</code></td>
<td>vector3</td>
<td><code>#rot=0,90,0</code></td>
<td>rotates camera to xyz-rotation 0,90,0</td>
</tr>
<tr>
<td><code>#t</code></td>
<td>vector2</td>
<td><code>#t=500,1000</code></td>
<td>sets animation-loop range between frame 500 and 1000</td>
</tr>
<tr>
<td><code>#......</code></td>
<td>string</td>
<td><code>#.cubes</code> <code>#cube</code></td>
<td>object(s) of interest (fragment to object name or class mapping)</td>
</tr>
</tbody>
</table>
<blockquote>
<p>xyz coordinates are similar to ones found in SVG Media Fragments</p>
</blockquote>
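<p>To make the fragment list above concrete, here is a minimal, non-normative sketch of parsing such a fragment string in JavaScript (function and variable names are illustrative, not part of the spec):</p>
<pre><code>// minimal sketch: parse '#pos=0.5,0,0&amp;t=500,1000' or '#.cubes' into typed values
function parseXRFragment(hash){
  const out = { selectors: [] }
  hash.replace(/^#/,'').split('&amp;').forEach( (part) =&gt; {
    const [key, value] = part.split('=')
    if( value === undefined ) out.selectors.push(key)       // '#cube' or '#.cubes'
    else if( ['pos','rot','t'].includes(key) )
      out[key] = value.split(',').map(Number)               // vector2 / vector3
    else out[key] = value                                   // e.g. 'q=...'
  })
  return out
}

// parseXRFragment('#pos=0.5,0,0&amp;t=500,1000')
// =&gt; { selectors:[], pos:[0.5,0,0], t:[500,1000] }
</code></pre>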
<h1 id="list-of-metadata-for-3d-nodes">List of metadata for 3D nodes</h1>
<table>
<thead>
<tr>
<th>key</th>
<th>type</th>
<th>example (JSON)</th>
<th>info</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>name</code></td>
<td>string</td>
<td><code>&quot;name&quot;: &quot;cube&quot;</code></td>
<td>available in all 3D fileformats &amp; scenes</td>
</tr>
<tr>
<td><code>class</code></td>
<td>string</td>
<td><code>&quot;class&quot;: &quot;cubes&quot;</code></td>
<td>available through custom property in 3D fileformats</td>
</tr>
<tr>
<td><code>href</code></td>
<td>string</td>
<td><code>&quot;href&quot;: &quot;b.gltf&quot;</code></td>
<td>available through custom property in 3D fileformats</td>
</tr>
<tr>
<td><code>src</code></td>
<td>string</td>
<td><code>&quot;src&quot;: &quot;#q=cube&quot;</code></td>
<td>available through custom property in 3D fileformats</td>
</tr>
</tbody>
</table>
<p>Popular compatible 3D fileformats: <code>.gltf</code>, <code>.obj</code>, <code>.fbx</code>, <code>.usdz</code>, <code>.json</code> (THREEjs), <code>COLLADA</code> and so on.</p>
<blockquote>
<p>NOTE: XR Fragments are file-agnostic, which means that the metadata exists in programmatic 3D scene(nodes) too.</p>
</blockquote>
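<p>As an illustration (a sketch, not a normative example): in glTF such custom properties typically end up in a node&rsquo;s <code>extras</code>, so the metadata from the table above could look like this inside <code>index.gltf</code>:</p>
<pre><code>{
  &quot;nodes&quot;: [
    {
      &quot;name&quot;: &quot;cube&quot;,
      &quot;extras&quot;: {
        &quot;class&quot;: &quot;cubes&quot;,
        &quot;href&quot;:  &quot;b.gltf&quot;,
        &quot;src&quot;:   &quot;#q=cube&quot;
      }
    }
  ]
}
</code></pre>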
<h1 id="navigating-3d">Navigating 3D</h1>
<p>Here&rsquo;s an ascii representation of a 3D scene-graph which contains 3D objects <code>◻</code> and their metadata:</p>
<pre><code> +--------------------------------------------------------+
| |
| index.gltf |
| │ |
| ├── ◻ buttonA |
| │ └ href: #pos=1,0,1&amp;t=100,200 |
| │ |
| └── ◻ buttonB |
| └ href: other.fbx | &lt;-- file-agnostic (can be .gltf .obj etc)
| |
+--------------------------------------------------------+
</code></pre>
<p>An XR Fragment-compatible browser viewing this scene allows the end-user to interact with <code>buttonA</code> and <code>buttonB</code>.<br>
In case of <code>buttonA</code> the end-user will be teleported to another location and time in the <strong>currently loaded scene</strong>, but <code>buttonB</code> will
<strong>replace the current scene</strong> with a new one, like <code>other.fbx</code>.</p>
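<p>A rough, non-normative sketch of how a browser could route such <code>href</code> values (<code>applyFragment</code> and <code>loadScene</code> are assumed helper functions, not part of the spec):</p>
<pre><code>// illustrative only: route an href to in-scene teleport or scene replacement
function onHref(href){
  if( href.startsWith('#') ) applyFragment(href)  // '#pos=1,0,1&amp;t=100,200' -&gt; move camera, set loop
  else                       loadScene(href)      // 'other.fbx' -&gt; replace the current scene
}
</code></pre>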
<h1 id="embedding-3d-content">Embedding 3D content</h1>
<p>Here&rsquo;s an ascii representation of a 3D scene-graph with 3D objects <code>◻</code> which embeds remote &amp; local 3D objects <code>◻</code> (with or without using queries):</p>
<pre><code> +--------------------------------------------------------+ +-------------------------+
| | | |
| index.gltf | | ocean.com/aquarium.fbx |
| │ | | │ |
| ├── ◻ canvas | | └── ◻ fishbowl |
| │ └ src: painting.png | | ├─ ◻ bass |
| │ | | └─ ◻ tuna |
| ├── ◻ aquariumcube | | |
| │ └ src: ://rescue.com/fish.gltf#q=bass%20tuna | +-------------------------+
| │ |
| ├── ◻ bedroom |
| │ └ src: #q=canvas |
| │ |
| └── ◻ livingroom |
| └ src: #q=canvas |
| |
+--------------------------------------------------------+
</code></pre>
<p>An XR Fragment-compatible browser viewing this scene lazy-loads and projects <code>painting.png</code> onto the (plane) object called <code>canvas</code> (which is copy-instanced in the bedroom and livingroom).<br>
Also, after lazy-loading <code>ocean.com/aquarium.fbx</code>, only the queried objects <code>bass</code> and <code>tuna</code> will be instanced inside <code>aquariumcube</code>.<br>
Resizing will happen according to its placeholder object <code>aquariumcube</code>, see chapter Scaling.<br></p>
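<p>A non-normative sketch of how a browser could resolve such <code>src</code> values into objects to instance (the helper names are assumptions; mime-type handling such as projecting images onto planes is left out):</p>
<pre><code>// non-normative sketch: resolve a src value into objects to instance
async function resolveSrc(src, localScene){
  const [url, frag] = src.split('#')                      // '://rescue.com/fish.gltf#q=bass%20tuna'
  const scene = url ? await loadRemote(url) : localScene  // empty url means: query the local scene ('#q=canvas')
  const query = decodeURIComponent(frag || '').replace(/^q=/,'')
  return query ? selectByQuery(scene, query) : scene.children
}
</code></pre>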
<h1 id="xr-fragment-queries">XR Fragment queries</h1>
<p>Include, exclude, or hide/show objects using space-separated strings:</p>
<ul>
<li><code>#q=cube</code></li>
<li><code>#q=cube -ball_inside_cube</code></li>
<li><code>#q=* -sky</code></li>
<li><code>#q=-.language .english</code></li>
<li><code>#q=cube&amp;rot=0,90,0</code></li>
<li><code>#q=price:&gt;2 price:&lt;5</code></li>
</ul>
<p>It&rsquo;s a simple but powerful syntax which allows <b>css</b>-like class/id-selectors with a search-engine prompt-style feeling:</p>
<ol>
<li>queries show/hide objects <strong>only</strong> when defined as a <code>src</code> value (this prevents sharing of scene-tampered URLs).</li>
<li>queries highlight objects when defined in the top-level (browser) URL (bar).</li>
<li>search words like <code>cube</code> and <code>foo</code> in <code>#q=cube foo</code> are matched against 3D object names or custom metadata-key(values)</li>
<li>search words like <code>cube</code> and <code>foo</code> in <code>#q=cube foo</code> are matched against tags (BibTeX) inside plaintext <code>src</code> values like <code>@cube{redcube, ...</code> e.g.</li>
<li><code>#</code> equals <code>#q=*</code></li>
<li>words starting with <code>.</code> like <code>.german</code> match class-metadata of 3D objects like <code>&quot;class&quot;:&quot;german&quot;</code></li>
<li>words starting with <code>.</code> like <code>.german</code> match class-metadata of (BibTeX) tags in XR Text objects like <code>@german{KarlHeinz, ...</code> e.g.</li>
</ol>
<blockquote>
<p><strong>For example</strong>: <code>#q=.foo</code> is a shorthand for <code>#q=class:foo</code>, which will select objects with custom property <code>class</code>:<code>foo</code>. A simple <code>#q=cube</code> will select an object named <code>cube</code>.</p>
</blockquote>
<ul>
<li>see <a href="https://coderofsalvation.github.io/xrfragment.media/queries.mp4">an example video here</a></li>
</ul>
<h2 id="including-excluding">including/excluding</h2>
<table>
<thead>
<tr>
<th>operator</th>
<th>info</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>*</code></td>
<td>select all objects (only useful in <code>src</code> custom property)</td>
</tr>
<tr>
<td><code>-</code></td>
<td>removes/hides object(s)</td>
</tr>
<tr>
<td><code>:</code></td>
<td>indicates an object-embedded custom property key/value</td>
</tr>
<tr>
<td><code>.</code></td>
<td>alias for class: <code>.foo</code> equals <code>class:foo</code></td>
</tr>
<tr>
<td><code>&gt;</code> <code>&lt;</code></td>
<td>compare float or int number</td>
</tr>
<tr>
<td><code>/</code></td>
<td>reference to root-scene.<br>Useful in case of (preventing) showing/hiding objects in nested scenes (instanced by <code>src</code>) (*)</td>
</tr>
</tbody>
</table>
<blockquote>
<p>* = <code>#q=-/cube</code> hides object <code>cube</code> only in the root-scene (not nested <code>cube</code> objects)<br> <code>#q=-cube</code> hides both object <code>cube</code> in the root-scene <b>AND</b> nested <code>cube</code> objects</p>
</blockquote>
<p><a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/q.js">» example implementation</a>
<a href="https://github.com/coderofsalvation/xrfragment/blob/main/example/assets/query.gltf#L192">» example 3D asset</a>
<a href="https://github.com/coderofsalvation/xrfragment/issues/3">» discussion</a></p>
<h2 id="query-parser">Query Parser</h2>
<p>Here&rsquo;s how to write a query parser:</p>
<ol>
<li>create an associative array/object to store query-arguments as objects</li>
<li>detect object id&rsquo;s &amp; properties <code>foo:1</code> and <code>foo</code> (reference regex: <code>/^.*:[&gt;&lt;=!]?/</code> )</li>
<li>detect excluders like <code>-foo</code>,<code>-foo:1</code>,<code>-.foo</code>,<code>-/foo</code> (reference regex: <code>/^-/</code> )</li>
<li>detect root selectors like <code>/foo</code> (reference regex: <code>/^[-]?\//</code> )</li>
<li>detect class selectors like <code>.foo</code> (reference regex: <code>/^[-]?class$/</code> )</li>
<li>detect number values like <code>foo:1</code> (reference regex: <code>/^[0-9\.]+$/</code> )</li>
<li>expand aliases like <code>.foo</code> into <code>class:foo</code></li>
<li>for every query token split string on <code>:</code></li>
<li>create an empty array <code>rules</code></li>
<li>then strip key-operator: convert &ldquo;-foo&rdquo; into &ldquo;foo&rdquo;</li>
<li>add operator and value to rule-array</li>
<li>then we set <code>id</code> to <code>true</code> or <code>false</code> (false = excluder <code>-</code>)</li>
<li>and we set <code>root</code> to <code>true</code> or <code>false</code> (true=<code>/</code> root selector is present)</li>
<li>we convert key &lsquo;/foo&rsquo; into &lsquo;foo&rsquo;</li>
<li>finally we add the key/value to the store like <code>store.foo = {id:false,root:true}</code> e.g.</li>
</ol>
<blockquote>
<p>An example query-parser (which compiles to many languages) can be <a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/xrfragment/Query.hx">found here</a></p>
</blockquote>
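<p>A condensed, non-normative sketch of these steps in JavaScript (the linked reference implementation is leading; names here are illustrative):</p>
<pre><code>// non-normative sketch of the query-parser steps above
function parseQuery(q){
  const store = {}                                              // 1. associative object
  q.split(/\s+/).filter( (t) =&gt; t ).forEach( (token) =&gt; {
    const rule = {}
    rule.id   = !/^-/.test(token)                               // excluder '-' =&gt; id:false
    rule.root = /^[-]?\//.test(token)                           // root selector '/'
    token = token.replace(/^-/,'').replace(/^\//,'')            // strip '-' and '/'
    if( token.match(/^\./) ) token = 'class:' + token.substr(1) // expand '.foo' =&gt; 'class:foo'
    const [key, value] = token.split(':')                       // split on ':'
    if( value !== undefined )
      rule.value = value.match(/^[0-9\.]+$/) ? parseFloat(value) : value
    store[key] = rule                                           // e.g. store.foo = {id:false, root:true}
  })
  return store
}

// parseQuery('cube -ball_inside_cube price:&gt;2 -/skybox .english')
</code></pre>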
<h2 id="xr-fragment-uri-grammar">XR Fragment URI Grammar</h2>
<pre><code>reserved = gen-delims / sub-delims
gen-delims = &quot;#&quot; / &quot;&amp;&quot;
sub-delims = &quot;,&quot; / &quot;=&quot;
</code></pre>
<blockquote>
<p>Example: <code>://foo.com/my3d.gltf#pos=1,0,0&amp;prio=-5&amp;t=0,100</code></p>
</blockquote>
<table>
<thead>
<tr>
<th>Demo</th>
<th>Explanation</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>pos=1,2,3</code></td>
<td>vector/coordinate argument e.g.</td>
</tr>
<tr>
<td><code>pos=1,2,3&amp;rot=0,90,0&amp;q=.foo</code></td>
<td>combinators</td>
</tr>
</tbody>
</table>
<h1 id="text-in-xr-tagging-linking-to-spatial-objects">Text in XR (tagging,linking to spatial objects)</h1>
<p>We still think and speak in simple text, not in HTML or RDF.<br>
The most advanced human will probably not shout <code>&lt;h1&gt;FIRE!&lt;/h1&gt;</code> in case of emergency.<br>
Given the new dawn of (non-keyboard) XR interfaces, keeping text as-is (not obscured with markup) is preferred.<br>
Ideally metadata should come <strong>later with</strong> the text, but not <strong>obfuscate</strong> the text, or live <strong>in another</strong> file.<br></p>
<blockquote>
<p>Humans first, machines (AI) later (<a href="#core-principle">core principle</a>)</p>
</blockquote>
<p>This way:</p>
<ol>
<li>XR Fragments allows <b id="tagging-text">hasslefree XR text tagging</b>, using BibTeX metadata <strong>at the end of content</strong> (like <a href="https://visual-meta.info">visual-meta</a>).</li>
<li>XR Fragments allows hasslefree <a href="#textual-tag">textual tagging</a>, <a href="#spatial-tag">spatial tagging</a>, and <a href="#supra-tagging">supra tagging</a>, by mapping 3D/text object (class)names using BibTeX &lsquo;tags&rsquo;</li>
<li>Bibs/BibTeX appendices are the first-choice <strong>requestless metadata</strong>-layer for XR text; HTML/RDF/JSON is great, but fits better in the application-layer</li>
<li>Default font (unless specified otherwise) is a modern monospace font, for maximized tabular expressiveness (see <a href="#core-principle">the core principle</a>).</li>
<li>anti-pattern: hardcoupling a mandatory <strong>obtrusive markup language</strong> or framework with an XR browser (HTML/VRML/Javascript) (see <a href="#core-principle">the core principle</a>)</li>
<li>anti-pattern: limiting human introspection, by immediately funneling human thought into typesafe, precise, pre-categorized metadata like RDF (see <a href="#core-principle">the core principle</a>)</li>
</ol>
<p>This allows recursive connections between text itself, as well as 3D objects and vice versa, using <strong>BibTags</strong>:</p>
<pre><code> +---------------------------------------------+ +------------------+
| My Notes | | / \ |
| | | / \ |
| The houses here are built in baroque style. | | /house\ |
| | | |_____| |
| | +---------|--------+
| @house{houses, &gt;----'house'--------| class/name match?
| url = {#.house} &gt;----'houses'-------` class/name match?
| } |
+---------------------------------------------+
</code></pre>
<p>This allows instant realtime tagging of objects at various scopes:</p>
<table>
<thead>
<tr>
<th>scope</th>
<th>matching algo</th>
</tr>
</thead>
<tbody>
<tr>
<td><b id="textual-tagging">textual</b></td>
<td>text containing &lsquo;houses&rsquo; is now automatically tagged with &lsquo;house&rsquo; (incl. plaintext <code>src</code> child nodes)</td>
</tr>
<tr>
<td><b id="spatial-tagging">spatial</b></td>
<td>spatial object(s) with <code>&quot;class&quot;:&quot;house&quot;</code> (because of <code>{#.house}</code>) are now automatically tagged with &lsquo;house&rsquo; (incl. child nodes)</td>
</tr>
<tr>
<td><b id="supra-tagging">supra</b></td>
<td>text- or spatial-object(s) (non-descendant nodes) elsewhere, named &lsquo;house&rsquo;, are automatically tagged with &lsquo;house&rsquo; (current node to root node)</td>
</tr>
<tr>
<td><b id="omni-tagging">omni</b></td>
<td>text- or spatial-object(s) (non-descendant nodes) elsewhere, containing class/name &lsquo;house&rsquo;, are automatically tagged with &lsquo;house&rsquo; (top node to all nodes)</td>
</tr>
<tr>
<td><b id="infinite-tagging">infinite</b></td>
<td>text- or spatial-object(s) (non-descendant nodes) elsewhere, containing class/name &lsquo;house&rsquo; or &lsquo;houses&rsquo;, are automatically tagged with &lsquo;house&rsquo; (top node to all nodes)</td>
</tr>
</tbody>
</table>
<p>This empowers the enduser&rsquo;s spatial expressiveness (see <a href="#core-principle">the core principle</a>): spatial wires can be rendered, words can be highlighted, spatial objects can be highlighted/moved/scaled, links can be manipulated by the user.<br>
The simplicity of appending BibTeX &lsquo;tags&rsquo; (humans first, machines later) is also demonstrated by <a href="https://visual-meta.info">visual-meta</a> in greater detail.</p>
<ol>
<li>The XR Browser needs to adjust tag-scope based on the enduser&rsquo;s needs/focus (infinite tagging only makes sense when the environment is scaled down significantly)</li>
<li>The XR Browser should always allow the human to view/edit the metadata, by clicking &lsquo;toggle metadata&rsquo; on the &lsquo;back&rsquo; (contextmenu e.g.) of any XR text, anywhere anytime.</li>
</ol>
<blockquote>
<p>NOTE: infinite matches both &lsquo;house&rsquo; and &lsquo;houses&rsquo; in text, as well as spatial objects with <code>&quot;class&quot;:&quot;house&quot;</code> or name &ldquo;house&rdquo;. This multiplexing of id/category is deliberate because of <a href="#core-principle">the core principle</a>.</p>
</blockquote>
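<p>A minimal, non-normative sketch of this matching across scopes (assuming <code>tagName</code>, <code>citeKey</code> and <code>selector</code> come from a parsed BibTag like <code>@house{houses, url = {#.house}}</code>):</p>
<pre><code>// minimal sketch: does a BibTag tag a given 3D/text object?
function matchesTag(tagName, citeKey, selector, object){
  if( selector &amp;&amp; selector.match(/^#\./) &amp;&amp; object.class == selector.substr(2) )
    return true                                             // '#.house' matches "class":"house"
  return object.name == tagName || object.name == citeKey   // name match: 'house' or 'houses'
}

// matchesTag('house', 'houses', '#.house', { name:'rentalhouse', class:'house' })  // true
</code></pre>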
<h2 id="default-data-uri-mimetype">Default Data URI mimetype</h2>
<p>The <code>src</code>-values work as expected (respecting mime-types), however:</p>
<p>The XR Fragment specification bumps the traditional default browser-mimetype</p>
<p><code>text/plain;charset=US-ASCII</code></p>
<p>to a green eco-friendly:</p>
<p><code>text/plain;charset=utf-8;bib=^@</code></p>
<p>This indicates that <a href="https://github.com/coderofsalvation/tagbibs">bibs</a> and <a href="https://en.wikipedia.org/wiki/BibTeX">bibtags</a> matching regex <code>^@</code> will automatically get filtered out, in order to:</p>
<ul>
<li>automatically detect links between textual/spatial objects</li>
<li>detect opinionated bibtag appendices (<a href="https://visual-meta.info">visual-meta</a> e.g.)</li>
</ul>
<p>Its concept is similar to literate programming, which empowers local/remote responses to:</p>
<ul>
<li>(de)multiplex human text and metadata in one go (see <a href="#core-principle">the core principle</a>)</li>
<li>no network-overhead for metadata (see <a href="#core-principle">the core principle</a>)</li>
<li>ensuring high FPS: HTML/RDF historically is too &lsquo;requesty&rsquo;/&lsquo;parsy&rsquo; for game studios</li>
<li>rich send/receive/copy-paste everywhere by default, metadata being retained (see <a href="#core-principle">the core principle</a>)</li>
<li>net result: fewer webservices, therefore fewer servers, and overall better FPS in XR</li>
</ul>
<blockquote>
<p>This significantly expands expressiveness and portability of human-tagged text, by <strong>postponing machine-concerns to the end of the human text</strong> in contrast to literal interweaving of content and markup symbols (or extra network requests, webservices e.g.).</p>
</blockquote>
<p>For all other purposes, regular mimetypes can be used (but are not required by the spec).<br>
To keep XR Fragments a lightweight spec, BibTeX is used for text/spatial tagging (not a scripting language or RDF e.g.).</p>
<blockquote>
<p>Applications are also free to attach any JSON(LD / RDF) to spatial objects using custom properties (but this is not interpreted by this spec).</p>
</blockquote>
<h2 id="url-and-data-uri">URL and Data URI</h2>
<pre><code> +--------------------------------------------------------------+ +------------------------+
| | | author.com/article.txt |
| index.gltf | +------------------------+
| │ | | |
| ├── ◻ article_canvas | | Hello friends. |
| │ └ src: ://author.com/article.txt | | |
| │ | | @friend{friends |
| └── ◻ note_canvas | | ... |
| └ src:`data:welcome human\n@...` | | } |
| | +------------------------+
| |
+--------------------------------------------------------------+
</code></pre>
<p>The enduser will only see <code>welcome human</code> and <code>Hello friends</code> rendered spatially.
The beauty is that text (AND visual-meta) in a Data URI promotes rich copy-paste.
In both cases, the text gets rendered immediately (onto a plane geometry, hence the name &lsquo;_canvas&rsquo;).
The XR Fragment-compatible browser can let the enduser access visual-meta(data)-fields after interacting with the object (contextmenu e.g.).</p>
<blockquote>
<p>additional tagging using <a href="https://github.com/coderofsalvation/tagbibs">bibs</a>: to tag spatial object <code>note_canvas</code> with &lsquo;todo&rsquo;, the enduser can type or speak <code>@note_canvas@todo</code></p>
</blockquote>
<p>The mapping between 3D objects and text (src-data) is simple:</p>
<p>Example:</p>
<pre><code> +------------------------------------------------+
| |
| index.gltf |
| │ |
| └── ◻ rentalhouse |
| └ class: house &lt;----------------- matches -------+
| └ ◻ note | |
| └ src:`data: todo: call owner | bib |
| @owner@house@todo | ----&gt; expands to @house{owner,
| | bibtex: }
| ` | @contact{
+------------------------------------------------+ }
</code></pre>
<p>Bi-directional mapping between 3D object names and/or classnames and text, using bibs, BibTags &amp; XR Fragments, allows for rich interlinking between text and 3D objects:</p>
<ol>
<li>When the user surfs to https://&hellip;/index.gltf#rentalhouse the XR Fragments-parser points the enduser to the rentalhouse object, and can show contextual info about it.</li>
<li>When (partial) remote content is embedded through XR Fragment queries (see XR Fragment queries), indirectly related metadata can be carried along.</li>
</ol>
<h2 id="bibs-bibtex-lowest-common-denominator-for-linking-data">Bibs &amp; BibTeX: lowest common denominator for linking data</h2>
<blockquote>
<p>&ldquo;When a car breaks down, the ones <strong>without</strong> turbosupercharger are easier to fix&rdquo;</p>
</blockquote>
<p>Unlike XML or JSON, the typeless, unnested, everything-is-text nature of BibTeX tags is a great advantage for introspection.<br>
It&rsquo;s a missing sensemaking precursor to extrospective RDF.<br>
BibTeX-appendices are already used in the digital AND physical world (academic books, <a href="https://visual-meta.info">visual-meta</a>), perhaps due to its terseness &amp; simplicity.<br>
In that sense, it&rsquo;s one step up from the <code>.ini</code> fileformat (which has never leaked into the physical world like BibTeX):</p>
<ol>
<li><b id="frictionless-copy-paste">frictionless copy/pasting</b> (by humans) of (unobtrusive) content AND metadata</li>
<li>an introspective &lsquo;sketchpad&rsquo; for metadata, which can (optionally) mature into RDF later</li>
</ol>
<table>
<thead>
<tr>
<th>characteristic</th>
<th>UTF8 Plain Text (with BibTeX)</th>
<th>RDF</th>
</tr>
</thead>
<tbody>
<tr>
<td>perspective</td>
<td>introspective</td>
<td>extrospective</td>
</tr>
<tr>
<td>structure</td>
<td>fuzzy (sensemaking)</td>
<td>precise</td>
</tr>
<tr>
<td>space/scope</td>
<td>local</td>
<td>world</td>
</tr>
<tr>
<td>everything is text (string)</td>
<td>yes</td>
<td>no</td>
</tr>
<tr>
<td>voice/paper-friendly</td>
<td><a href="https://github.com/coderofsalvation/tagbibs">bibs</a></td>
<td>no</td>
</tr>
<tr>
<td>leaves (dictated) text intact</td>
<td>yes</td>
<td>no</td>
</tr>
<tr>
<td>markup language</td>
<td>just an appendix</td>
<td>~4 different</td>
</tr>
<tr>
<td>polyglot format</td>
<td>no</td>
<td>yes</td>
</tr>
<tr>
<td>easy to copy/paste content+metadata</td>
<td>yes</td>
<td>up to application</td>
</tr>
<tr>
<td>easy to write/repair for layman</td>
<td>yes</td>
<td>depends</td>
</tr>
<tr>
<td>easy to (de)serialize</td>
<td>yes (fits on A4 paper)</td>
<td>depends</td>
</tr>
<tr>
<td>infrastructure</td>
<td>selfcontained (plain text)</td>
<td>(semi)networked</td>
</tr>
<tr>
<td>freeform tagging/annotation</td>
<td>yes, terse</td>
<td>yes, verbose</td>
</tr>
<tr>
<td>can be appended to text-content</td>
<td>yes</td>
<td>up to application</td>
</tr>
<tr>
<td>copy-paste text preserves metadata</td>
<td>yes</td>
<td>up to application</td>
</tr>
<tr>
<td>emoji</td>
<td>yes</td>
<td>depends on encoding</td>
</tr>
<tr>
<td>predicates</td>
<td>free</td>
<td>semi pre-determined</td>
</tr>
<tr>
<td>implementation/network overhead</td>
<td>no</td>
<td>depends</td>
</tr>
<tr>
<td>used in (physical) books/PDF</td>
<td>yes (visual-meta)</td>
<td>no</td>
</tr>
<tr>
<td>terse non-verb predicates</td>
<td>yes</td>
<td>no</td>
</tr>
<tr>
<td>nested structures</td>
<td>no (but: BibTeX rulers)</td>
<td>yes</td>
</tr>
</tbody>
</table>
<h2 id="xr-text-example-parser">XR Text example parser</h2>
<ol>
<li>The XR Fragments spec does not aim to harden the BibTeX format</li>
<li>However, respect multi-line BibTeX values because of <a href="#core-principle">the core principle</a></li>
<li>Expand bibs and rulers (like <code>${visual-meta-start}</code>) according to the <a href="https://github.com/coderofsalvation/tagbibs">tagbibs spec</a></li>
<li>BibTeX snippets should always start at the beginning of a line (regex: ^@), hence mimetype <code>text/plain;charset=utf-8;bib=^@</code></li>
</ol>
<p>Here&rsquo;s an XR Text (de)multiplexer in JavaScript, which ticks all the above boxes:</p>
<pre><code>xrtext = {
decode: (str) =&gt; {
// bibtex: ↓@ ↓&lt;tag|tag{phrase,|{ruler}&gt; ↓property ↓end
let pat = [ /@/, /^\S+[,{}]/, /},/, /}/ ]
let tags = [], text='', i=0, prop=''
var bibs = { regex: /(@[a-zA-Z0-9_+]+@[a-zA-Z0-9_@]+)/g, tags: {}}
let lines = str.replace(/\r?\n/g,'\n').split(/\n/)
for( let i = 0; !lines[i].match( /^@/ ); i++ ) text += lines[i]+'\n'
bibtex = lines.join('\n').substr( text.length )
bibtex.replace( bibs.regex , (m,k,v) =&gt; {
tok = m.substr(1).split(&quot;@&quot;)
match = tok.shift()
tok.map( (t) =&gt; bibs.tags[match] = `@${t}{${match},\n}\n` )
})
bibtex = Object.values(bibs.tags).join('\n') + bibtex.replace( bibs.regex, '')
bibtex.split( pat[0] ).map( (t) =&gt; {
try{
let v = {}
if( !(t = t.trim()) ) return
if( tag = t.match( pat[1] ) ) tag = tag[0]
if( tag.match( /^{.*}$/ ) ) return tags.push({ruler:tag})
t = t.substr( tag.length )
t.split( pat[2] )
.map( kv =&gt; {
if( !(kv = kv.trim()) || kv == &quot;}&quot; ) return
v[ kv.match(/\s?(\S+)\s?=/)[1] ] = kv.substr( kv.indexOf(&quot;{&quot;)+1 )
})
tags.push( { k:tag, v } )
}catch(e){ console.error(e) }
})
return {text, tags}
},
encode: (text,tags) =&gt; {
let str = text+&quot;\n&quot;
for( let i in tags ){
let item = tags[i]
if( item.ruler ){
str += `@${item.ruler}\n`
continue;
}
str += `@${item.k}\n`
for( let j in item.v ) str += ` ${j} = {${item.v[j]}}\n`
str += `}\n`
}
return str
}
}
</code></pre>
<p>The above functions (de)multiplex text/metadata, expand bibs, and (de)serialize bibtex (and it all fits more or less on one A4 paper)</p>
<blockquote>
<p>The above can be used as a starting point for LLMs to translate/steelman into a more formal form/language.</p>
</blockquote>
<pre><code>str = `
hello world
@hello@greeting
@{some-section}
@flap{
asdf = {23423}
}`
var {tags,text} = xrtext.decode(str) // demultiplex text &amp; bibtex
tags.find( (t) =&gt; t.k == 'flap{' ).v.asdf = 1 // edit tag
tags.push({ k:'bar{', v:{abc:123} }) // add tag
console.log( xrtext.encode(text,tags) ) // multiplex text &amp; bibtex back together
</code></pre>
<p>This outputs:</p>
<pre><code>hello world
@greeting{hello,
}
@{some-section}
@flap{
asdf = {1}
}
@bar{
abc = {123}
}
</code></pre>
<h1 id="hyper-copy-paste">HYPER copy/paste</h1>
<p>The previous example offers something exciting compared to simple copy/paste of 3D objects or text.
XR Text, according to the XR Fragment spec, allows HYPER copy/paste: time, space and text interlinked.
Therefore, the enduser in an XR Fragment-compatible browser can copy/paste/share data in these ways:</p>
<ol>
<li>time/space: 3D object (current animation-loop)</li>
<li>text: TeXt object (including BibTeX/visual-meta if any)</li>
<li>interlinked: Collected objects by visual-meta tag</li>
</ol>
<h1 id="security-considerations">Security Considerations</h1>
<p>Since XR Text contains metadata too, the user should be able to set up tagging-rules, so the copy-paste feature can:</p>
<ul>
<li>filter out sensitive data when copy/pasting (XR text with <code>class:secret</code> e.g.)</li>
</ul>
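<p>A sketch of such a tagging-rule (illustrative; the rule format is an assumption, not part of this spec):</p>
<pre><code>// illustrative sketch: strip tags marked sensitive before copy/paste leaves the browser
function filterSensitive(tags, blocklist = ['secret']){
  return tags.filter( (t) =&gt; !blocklist.includes( t.v &amp;&amp; t.v.class ) )
}
</code></pre>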
<h1 id="iana-considerations">IANA Considerations</h1>
<p>This document has no IANA actions.</p>
<h1 id="acknowledgments">Acknowledgments</h1>
<p>TODO acknowledge.</p>
</section>
</body>
</html>