<!DOCTYPE html>
<html>
<head>
<title>XR Fragments</title>
<meta name="GENERATOR" content="github.com/mmarkdown/mmark Mmark Markdown Processor - mmark.miek.nl">
<meta charset="utf-8">
</head>
<body>
<!-- for annotated version see: https://raw.githubusercontent.com/ietf-tools/rfcxml-templates-and-schemas/main/draft-rfcxml-general-template-annotated-00.xml -->
<style type="text/css">
body{
font-family: monospace;
max-width: 1000px;
font-size: 15px;
padding: 0% 10%;
line-height: 30px;
color:#555;
background:#F7F7F7;
}
h1 { margin-top:40px; }
pre{ line-height:18px; }
a,a:visited,a:active{ color: #70f; }
code{
border: 1px solid #AAA;
border-radius: 3px;
padding: 0px 5px 2px 5px;
}
pre{
line-height: 18px;
overflow: auto;
padding: 12px;
}
pre + code {
background:#DDD;
}
pre>code{
border:none;
border-radius:0px;
padding:0;
}
blockquote{
padding-left: 30px;
margin: 0;
border-left: 5px solid #CCC;
}
th {
border-bottom: 1px solid #000;
text-align: left;
padding-right:45px;
padding-left:7px;
background: #DDD;
}
td {
border-bottom: 1px solid #CCC;
font-size:13px;
}
</style>
<br>
<h1>XR Fragments</h1>
<br>
<pre>
stream: IETF
area: Internet
status: informational
author: Leon van Kammen
date: 2023-04-12T00:00:00Z
workgroup: Internet Engineering Task Force
value: draft-XRFRAGMENTS-leonvankammen-00
</pre>
<h1 class="special" id="abstract">Abstract</h1>
<p>This draft is a specification for interactive URI-controllable 3D files, enabling <a href="https://github.com/coderofsalvation/hypermediatic">hypermediatic</a> navigation, to establish a spatial web for hypermedia browsers with or without a network-connection.<br>
The specification uses <a href="https://www.w3.org/TR/media-frags/">W3C Media Fragments</a> and <a href="https://www.rfc-editor.org/rfc/rfc6570">URI Templates (RFC6570)</a> to promote spatial addressability, sharing, navigation, filtering and databinding of objects for (XR) Browsers.<br>
XR Fragments allows us to better use existing metadata inside 3D scene(files), by connecting it to proven technologies like <a href="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</a>.<br>
XR Fragments views spatial webs thru the lens of 3D scene URI&rsquo;s, rather than thru code(frameworks) or protocol-specific browsers (webbrowser e.g.).</p>
<blockquote>
<p>XR Fragments is a <b>Meta scene format</b> which leverages heuristic rules derived from any 3D scene or well-established 3D file formats, to extract meaningful features from scene hierarchies.<br>
These heuristics enable features that are both meaningful and consistent across different scene representations, allowing <b>higher interop</b> between fileformats, 3D editors, viewers and game-engines.</p>
<p>Almost every idea in this document is demonstrated at <a href="https://xrfragment.org">https://xrfragment.org</a></p>
</blockquote>
<section data-matter="main">
<h1 id="introduction">Introduction</h1>
<p>How can we add more control to existing text and 3D scenes, without introducing new dataformats?<br>
Historically, there have been many attempts to create the ultimate 3D fileformat.<br>
The lowest common denominator is: designers describing/tagging/naming things using <strong>plain text</strong>.<br>
XR Fragments exploits the fact that all 3D models already contain such metadata:</p>
<p><strong>XR Fragments allows controlling of metadata in 3D scene(files) using URI&rsquo;s</strong></p>
<p>It solves:</p>
<ol>
<li>addressability and <a href="https://github.com/coderofsalvation/hypermediatic">hypermediatic</a> navigation of 3D scenes/objects: <a href="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</a> using src/href spatial metadata</li>
<li>Interlinking text &amp; spatial objects by collapsing space into a Word Graph (XRWG) to show <a href="#visible-links">visible links</a></li>
<li>unlocking spatial potential of the (originally 2D) hashtag (which jumps to a chapter) for navigating XR documents</li>
<li>refraining from introducing scripting-engines for mundane tasks (and preventing its inevitable security-headaches)</li>
<li>the gap between text and 3D objects: object-names directly map to hashtags (=fragments), which allows 3D-to-text transcription.</li>
</ol>
<blockquote>
<p>NOTE: The chapters in this document are ordered from highlevel to lowlevel (technical) as much as possible</p>
</blockquote>
<h1 id="core-principle">Core principle</h1>
<p><strong>XR Fragments allows controlling 3D models using URLs, based on (non)existing metadata via URI&rsquo;s</strong></p>
<p>XR Fragments seeks to connect the world of text (semantic web / RDF) and the world of pixels.<br>
Instead of forcing authors to combine 3D/2D objects programmatically (publishing thru a game-editor e.g.), XR Fragments <strong>integrates all</strong> which allows a universal viewing experience.<br></p>
<pre><code> +───────────────────────────────────────────────────────────────────────────────────────────────+
│ │
│ U R N │
│ U R L | │
│ | |-----------------+--------| │
│ +--------------------------------------------------| │
│ | │
│ + https://foo.com/some/foo/scene.glb#someview &lt;-- http URI (=URL and has URN) │
│ | │
│ + ipfs://cfe0987ec9r9098ecr/cats.fbx#someview &lt;-- an IPFS URI (=URL and has URN) │
│ │
│ ec09f7e9cf8e7f09c8e7f98e79c09ef89e000efece8f7ecfe9fe &lt;-- an interpeer URI │
│ │
│ │
│ |------------------------+-------------------------| │
│ | │
│ U R I │
│ │
+───────────────────────────────────────────────────────────────────────────────────────────────+
</code></pre>
<p>Fact: our typical browser URL&rsquo;s are just <strong>a possible implementation</strong> of URI&rsquo;s (for untapped humancentric potential of URI&rsquo;s <a href="https://interpeer.io">see interpeer.io</a>)</p>
<blockquote>
<p>XR Fragments does not look at XR (or the web) thru the lens of HTML or URLs.<br>But approaches things from a higherlevel feedbackloop/hypermedia browser-perspective.</p>
</blockquote>
<p>Below you can see how this translates back into good-old URLs:</p>
<pre><code> +───────────────────────────────────────────────────────────────────────────────────────────────+
│ │
│ the soul of any URL: ://macro /meso ?micro #nano │
│ │
│ 2D URL: ://library.com /document ?search #chapter │
│ xrf:// │
│ 4D URL: ://park.com /4Dscene.fbx ─&gt; ?other.glb ─&gt; #view ───&gt; hashbus │
│ │ #filter │ │
│ │ #tag │ │
│ │ (hypermediatic) #material │ │
│ │ ( feedback ) #animation │ │
│ │ ( loop ) #texture │ │
│ │ #variable │ │
│ │ │ │
│ XRWG &lt;─────────────────────&lt;─────────────+ │
│ │ │ │
│ └─ objects ──────────────&gt;─────────────+ │
│ │
│ │
+───────────────────────────────────────────────────────────────────────────────────────────────+
</code></pre>
<blockquote>
<p>?-linked and #-linked navigation are JUST one possible way to implement XR Fragments: the essential goal is to allow a Hypermediatic FeedbackLoop (HFL) between external and internal 4D navigation.</p>
</blockquote>
<p>Traditional webbrowsers can become 4D document-ready by adopting the XR Fragments Trinity below:</p>
<h1 id="the-xr-fragments-trinity">The XR Fragments Trinity</h1>
<p>XR Fragments utilizes URLs:</p>
<ol>
<li>for 3D viewers/browser to manipulate the camera or objects (via URLbar)</li>
<li>as <strong>implicit</strong> metadata to reference (nested) objects <strong>inside</strong> 3D scene-file (local and remote)</li>
<li>via <strong>explicit</strong> metadata (&lsquo;extras&rsquo;) <strong>inside</strong> 3D scene-files (interaction e.g.) or</li>
<li>[optionally for developers] via <strong>explicit</strong> metadata <strong>outside</strong> 3D scene-files (via <a href="https://en.wikipedia.org/wiki/Sidecar_file">sidecarfile</a>)</li>
</ol>
<h1 id="list-of-uri-fragments">List of URI Fragments</h1>
<table>
<thead>
<tr>
<th>fragment</th>
<th>type</th>
<th>example</th>
<th>info</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>#pos</code></td>
<td>vector3</td>
<td><code>#pos=0.5,0,0</code> <code>#pos=room</code> <code>#pos=cam2</code></td>
<td>positions/parents camera(rig) (or XR floor) to xyz-coord/object/camera</td>
</tr>
<tr>
<td><code>#rot</code></td>
<td>vector3</td>
<td><code>#rot=0,90,0</code></td>
<td>rotates the camera to xyz-rotation 0,90,0 (degrees)</td>
</tr>
<tr>
<td><a href="https://www.w3.org/TR/media-frags/">Media Fragments</a></td>
<td><a href="#media%20fragments%20and%20datatypes">media fragment</a></td>
<td><code>#t=0,2&amp;loop</code></td>
<td>play (and loop) 3D animation from 0 seconds till 2 seconds</td>
</tr>
</tbody>
</table>
<h1 id="list-of-explicit-metadata">List of *<em>explicit</em> metadata</h1>
<p>These are the possible &lsquo;extras&rsquo; for 3D nodes and sidecar-files</p>
<table>
<thead>
<tr>
<th>key</th>
<th>type</th>
<th>example (JSON)</th>
<th>function</th>
<th>existing compatibility</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>href</code></td>
<td>string</td>
<td><code>&quot;href&quot;: &quot;b.gltf&quot;</code></td>
<td>XR teleport</td>
<td>custom property in 3D fileformats</td>
</tr>
<tr>
<td><code>src</code></td>
<td>string</td>
<td><code>&quot;src&quot;: &quot;#cube&quot;</code></td>
<td>XR embed / teleport</td>
<td>custom property in 3D fileformats</td>
</tr>
<tr>
<td><code>tag</code></td>
<td>string</td>
<td><code>&quot;tag&quot;: &quot;cubes geo&quot;</code></td>
<td>tag object (for filter-use / XRWG highlighting)</td>
<td>custom property in 3D fileformats</td>
</tr>
<tr>
<td><code>#</code></td>
<td>string</td>
<td><code>&quot;#&quot;: &quot;#mypreset&quot;</code></td>
<td>trigger default fragment on load</td>
<td>custom property in 3D fileformats</td>
</tr>
</tbody>
</table>
<blockquote>
<p>Supported popular compatible 3D fileformats: <code>.gltf</code>, <code>.obj</code>, <code>.fbx</code>, <code>.usdz</code>, <code>.json</code> (THREE.js), <code>.dae</code> and so on.</p>
</blockquote>
<h2 id="sidecar-file">Sidecar-file</h2>
<blockquote>
<p>NOTE: sidecar-files break the portability of XR (Fragments) experiences, therefore side-car files are discouraged for consumer usage/sharing. However, they can accommodate developers or applications who (for whatever reason) must not modify the 3D scene-file (a <code>.glb</code> e.g.).</p>
</blockquote>
<p>For developers, a sidecar-file allows defining <strong>explicit</strong> XR Fragments metadata outside of the 3D file.<br>
This can be done via JSON-pointers (<a href="https://www.rfc-editor.org/rfc/rfc6901">RFC6901</a>) in a JSON <a href="https://en.wikipedia.org/wiki/Sidecar_file">sidecar-file</a>:</p>
<ul>
<li>experience.glb</li>
<li>experience.json</li>
</ul>
<pre><code class="language-json">{
&quot;/&quot;:{
&quot;#&quot;: &quot;#-penguin&quot;,
&quot;aria-description&quot;: &quot;description of scene&quot;
},
&quot;/room/chair&quot;: {
&quot;href&quot;: &quot;#penguin&quot;
}
}
</code></pre>
<blockquote>
<p>This would mean: hide object(s) with name or <code>tag</code>-value &lsquo;penguin&rsquo; upon scene-load, and show them when the user clicks the chair</p>
</blockquote>
<p>So after loading <code>experience.glb</code> the existence of <code>experience.json</code> is detected, to apply the explicit metadata.<br>
The sidecar will define (or <strong>override</strong> already existing) extras, which can be handy for multi-user platforms (offer 3D scene customization/personalization to users).</p>
<blockquote>
<p>In THREE.js-code this would boil down to:</p>
</blockquote>
<pre><code class="language-javascript"> scene.userData['#'] = &quot;#chair&amp;penguin&quot;
scene.userData['aria-description'] = &quot;description of scene&quot;
scene.getObjectByName(&quot;room&quot;).getObjectByName(&quot;chair&quot;).userData.href = &quot;#penguin&quot;
// now the XR Fragments parser can process the XR Fragments userData 'extras' in the scene
</code></pre>
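<p>Below is a minimal, non-normative sketch of how a viewer could detect and apply such a sidecar-file generically; it assumes THREE.js, a scene loaded from <code>experience.glb</code>, and the same-basename <code>.json</code> convention from the example above (the function name is illustrative):</p>
<pre><code class="language-javascript">// sketch: apply a JSON sidecar-file (JSON-pointer keys) as THREE.js userData 'extras'
// assumes `scene` is the root THREE.Object3D of the loaded experience.glb
async function applySidecar(scene, sceneURL) {
  const res = await fetch(sceneURL.replace(/\.(glb|gltf|fbx|usdz)$/, '.json'))
  if (!res.ok) return                                // no sidecar present: nothing to do
  const extras = await res.json()
  for (const pointer in extras) {
    // resolve '/room/chair' to the nested object, '/' to the scene root
    const path = pointer.split('/').filter((p) =&gt; p.length)
    let node = scene
    for (const name of path) node = node &amp;&amp; node.getObjectByName(name)
    if (node) Object.assign(node.userData, extras[pointer])  // define or override extras
  }
}
</code></pre>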
<h1 id="hypermediatic-feedbackloop-for-xr-browsers">Hypermediatic FeedbackLoop for XR browsers</h1>
<p><code>href</code> metadata traditionally implies <strong>click</strong> AND <strong>navigate</strong>, however XR Fragments adds stateless <strong>click</strong> (<code>xrf://#....</code>) or <strong>navigate</strong> (<code>xrf://#pos=...</code>)
as well (which allows many extra interactions that would otherwise need a scripting language). These are known as <strong>hashbus</strong>-only events (see image above).</p>
<blockquote>
<p>Being able to use the same URI Fragment DSL for navigation (<code>href: #foo</code>) as well as interactions (<code>href: xrf://#bar</code>) greatly simplifies implementation, increases HFL, and reduces need for scripting languages.</p>
</blockquote>
<p>This opens up the following benefits for traditional &amp; future webbrowsers:</p>
<ul>
<li><a href="https://github.com/coderofsalvation/hypermediatic">hypermediatic</a> loading/clicking 3D assets (gltf/fbx e.g.) natively (with or without using HTML).</li>
<li>allowing 3D assets/nodes to publish XR Fragments to themselves/eachother using the <code>xrf://</code> hashbus</li>
<li>collapsing the 3D scene to a wordgraph (for essential navigation purposes) controllable thru a hash(tag)bus</li>
<li>completely bypassing the security-trap of loading external scripts (by loading 3D model-files, not HTML-javascriptable resources)</li>
</ul>
<p>XR Fragments itself is <a href="https://github.com/coderofsalvation/hypermediatic">hypermediatic</a> and HTML-agnostic, though pseudo-XR Fragment browsers <strong>can</strong> be implemented on top of HTML/Javascript.</p>
<table>
<thead>
<tr>
<th>principle</th>
<th>XR 4D URL</th>
<th>HTML 2D URL</th>
</tr>
</thead>
<tbody>
<tr>
<td>the XRWG</td>
<td>wordgraph (collapses 3D scene to tags)</td>
<td>Ctrl-F (find)</td>
</tr>
<tr>
<td>the hashbus</td>
<td>hashtags alter camera/scene/object-projections</td>
<td>hashtags alter document positions</td>
</tr>
<tr>
<td>src metadata</td>
<td>renders content and offers sourceportation</td>
<td>renders content</td>
</tr>
<tr>
<td>href metadata</td>
<td>teleports to other XR document</td>
<td>jumps to other HTML document</td>
</tr>
<tr>
<td>href metadata</td>
<td>triggers predefined view</td>
<td>Media fragments</td>
</tr>
<tr>
<td>href metadata</td>
<td>triggers camera/scene/object/projections</td>
<td>n/a</td>
</tr>
<tr>
<td>href metadata</td>
<td>draws visible connection(s) for XRWG &lsquo;tag&rsquo;</td>
<td>n/a</td>
</tr>
<tr>
<td>href metadata</td>
<td>filters certain (in)visible objects</td>
<td>n/a</td>
</tr>
<tr>
<td>href metadata</td>
<td>href=&ldquo;xrf://#-foo&amp;bar&rdquo;</td>
<td>href=&ldquo;javascript:hideFooAndShowBar()&rdquo;</td>
</tr>
<tr>
<td></td>
<td>(this does not update topLevel URI)</td>
<td>(this is non-standard, non-hypermediatic)</td>
</tr>
</tbody>
</table>
<blockquote>
<p>An important aspect of HFL is that URI Fragments can be triggered without updating the top-level URI (default href-behaviour) thru their own &lsquo;bus&rsquo; (<code>xrf://#.....</code>). This decoupling between navigation and interaction prevents non-standard things like (<code>href</code>:<code>javascript:dosomething()</code>).</p>
</blockquote>
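<p>As an illustration (not part of the specification), such a hashbus can be sketched as a tiny publish/subscribe channel for fragment strings; the object and function names below are made up for this example:</p>
<pre><code class="language-javascript">// sketch: a minimal hashbus; fragments are published to listeners, and only
// navigation (default href-behaviour) bubbles up to the top-level URL
const hashbus = {
  listeners: [],
  on(fn) { this.listeners.push(fn) },
  pub(fragment, updateTopLevelURL) {
    const frag = fragment.replace(/^xrf:\/\/#?|^#/, '')    // strip 'xrf://#' or '#'
    this.listeners.forEach((fn) =&gt; fn(frag))
    if (updateTopLevelURL) document.location.hash = frag   // navigation: href '#pos=..'
  }
}

hashbus.on((frag) =&gt; console.log('evaluate fragment:', frag))
hashbus.pub('xrf://#-foo&amp;bar', false)   // stateless interaction, URL-bar untouched
hashbus.pub('#pos=1,0,0', true)          // navigation, reflected in the URL-bar
</code></pre>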
<h1 id="conventions-and-definitions">Conventions and Definitions</h1>
<p>See appendix below in case certain terms are not clear.</p>
<h2 id="xr-fragment-url-grammar">XR Fragment URL Grammar</h2>
<p>For typical HTTP-like browsers/applications:</p>
<pre><code>reserved = gen-delims / sub-delims
gen-delims = &quot;#&quot; / &quot;&amp;&quot;
sub-delims = &quot;,&quot; / &quot;=&quot;
</code></pre>
<blockquote>
<p>Example: <code>://foo.com/my3d.gltf#pos=1,0,0&amp;prio=-5&amp;t=0,100</code></p>
</blockquote>
<table>
<thead>
<tr>
<th>Demo</th>
<th>Explanation</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>pos=1,2,3</code></td>
<td>vector/coordinate argument e.g.</td>
</tr>
<tr>
<td><code>pos=1,2,3&amp;rot=0,90,0&amp;foo</code></td>
<td>combinators</td>
</tr>
</tbody>
</table>
<blockquote>
<p>this is already implemented in all browsers</p>
</blockquote>
<p>Pseudo (non-native) browser-implementations (supporting XR Fragments using HTML+JS e.g.) can use the <code>?</code> search-operator to address outbound content.<br>
In other words, the URL updates to: <code>https://me.com?https://me.com/other.glb</code> when navigating to <code>https://me.com/other.glb</code> from inside a <code>https://me.com</code> WebXR experience e.g.<br>
That way, if the link gets shared, the XR Fragments implementation at <code>https://me.com</code> can load the latter (and still indicate which XR Fragments entrypoint-experience/client was used).</p>
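<p>A rough sketch of that <code>?</code>-convention in an HTML+JS pseudo-implementation (the <code>loadScene</code> function is hypothetical):</p>
<pre><code class="language-javascript">// sketch: pseudo-browser outbound navigation using the '?' search-operator,
// so shared links keep pointing at the entrypoint implementation (me.com e.g.)
function navigateOutbound(outboundURL) {
  const entrypoint = document.location.origin + document.location.pathname
  history.pushState({}, '', entrypoint + '?' + outboundURL) // me.com?https://me.com/other.glb
  loadScene(outboundURL)                                    // hypothetical scene loader
}

// on page-load: restore the outbound scene from the search-operator (if any)
const outbound = decodeURIComponent(document.location.search.slice(1))
if (outbound.startsWith('http')) loadScene(outbound)
</code></pre>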
<h1 id="spatial-referencing-3d">Spatial Referencing 3D</h1>
<p>XR Fragments assume the following objectname-to-URIFragment mapping:</p>
<pre><code>
my.io/scene.fbx
+─────────────────────────────+
│ sky │ src: http://my.io/scene.fbx#sky (includes building,mainobject,floor)
│ +─────────────────────────+ │
│ │ building │ │ src: http://my.io/scene.fbx#building (includes mainobject,floor)
│ │ +─────────────────────+ │ │
│ │ │ mainobject │ │ │ src: http://my.io/scene.fbx#mainobject (includes floor)
│ │ │ +─────────────────+ │ │ │
│ │ │ │ floor │ │ │ │ src: http://my.io/scene.fbx#floor (just floor object)
│ │ │ │ │ │ │ │
│ │ │ +─────────────────+ │ │ │
│ │ +─────────────────────+ │ │
│ +─────────────────────────+ │
+─────────────────────────────+
</code></pre>
<blockquote>
<p>Every 3D fileformat supports named 3D objects, and these names allow URLs (fragments) to reference them (and their child objects).</p>
</blockquote>
<p>Clever nested design of 3D scenes allows great ways of re-using content and/or previewing scenes.<br>
For example, to render a portal with a preview-version of the scene, create a 3D object with:
<ul>
<li>href: <code>https://scene.fbx</code></li>
<li>src: <code>https://otherworld.gltf#mainobject</code></li>
</ul>
<blockquote>
<p>It also allows <strong>sourceportation</strong>, which basically means the enduser can teleport to the original XR Document of an <code>src</code> embedded object, and see a visible connection to the particular embedded object. Basically an embedded link becoming an outbound link by activating it.</p>
</blockquote>
<h2 id="level2-implicit-uri-fragments">Level2: Implicit URI Fragments</h2>
<p>These fragments are derived from objectnames (or their extras) within a 3D scene, and trigger certain actions when evaluated by the browser:</p>
<table>
<thead>
<tr>
<th></th>
<th>fragment</th>
<th>type</th>
<th>example</th>
<th>info</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>PRESET</strong></td>
<td><code>#&lt;preset&gt;</code></td>
<td>string</td>
<td><code>#cubes</code></td>
<td>evaluates preset (<code>#foo&amp;bar</code>) when a scene contains extra (<code>#cubes: #foo&amp;bar</code> e.g.) while URL-browserbar reflects <code>#cubes</code>. Only works when metadata-key starts with <code>#</code></td>
</tr>
<tr>
<td><strong>FOCUS</strong></td>
<td><code>#&lt;tag_or_objectname&gt;</code></td>
<td>string</td>
<td><code>#person</code></td>
<td>focuses (and shows) object(s) with <code>tag: person</code> or name <code>person</code> (XRWG lookup)</td>
</tr>
<tr>
<td><strong>FILTERS</strong></td>
<td><code>#[!][-]&lt;tag_or_objectname&gt;[*]</code></td>
<td>string</td>
<td><code>#person</code> (<code>#-person</code>)</td>
<td>will reset (<code>!</code>), show/focus, or hide (<code>-</code>) object(s) with <code>tag: person</code> or name <code>person</code> by looking up the XRWG (<code>*</code>=including children)</td>
</tr>
<tr>
<td><strong>MATERIALUPDATE</strong></td>
<td><code>#&lt;tag_or_objectname&gt;[*]=&lt;materialname&gt;</code></td>
<td>string=string</td>
<td><code>#car=metallic</code></td>
<td>sets material of car to material with name <code>metallic</code> (<code>*</code>=including children)</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td><code>#soldout*=halfopacity</code></td>
<td>sets material of objects tagged or named <code>soldout</code> (including children) to material with name <code>halfopacity</code></td>
</tr>
<tr>
<td><strong>VARIABLE UPDATE</strong></td>
<td><code>#&lt;variable&gt;=&lt;metadata-key&gt;</code></td>
<td>string=string</td>
<td><code>#foo=bar</code></td>
<td>sets <a href="https://www.rfc-editor.org/rfc/rfc6570">URI Template</a> variable <code>foo</code> to the value <code>#t=0</code> from <strong>existing</strong> object metadata (<code>bar</code>:<code>#t=0</code> e.g.), This allows for reactive <a href="https://www.rfc-editor.org/rfc/rfc6570">URI Template</a> defined in object metadata elsewhere (<code>src</code>:<code>://m.com/cat.mp4#{foo}</code> e.g., to play media using <a href="https://www.w3.org/TR/media-frags/#valid-uri">media fragment URI</a>). NOTE: metadata-key should not start with <code>#</code></td>
</tr>
<tr>
<td><strong>ANIMATION</strong></td>
<td><code>#&lt;tag_or_objectname&gt;=&lt;animationname&gt;</code></td>
<td>string=string</td>
<td><code>#people=walk</code> <code>#people=noanim</code></td>
<td>assign a different animation to object(s)</td>
</tr>
</tbody>
</table>
<h2 id="media-fragments-and-datatypes">media fragments and datatypes</h2>
<blockquote>
<p>NOTE: below the word &lsquo;play&rsquo; applies to 3D animations embedded in the 3D scene(file) <strong>but also</strong> media defined in <code>src</code>-metadata like audio/video-files (mp3/mp4 e.g.)</p>
</blockquote>
<table>
<thead>
<tr>
<th>type</th>
<th>syntax</th>
<th>example</th>
<th>info</th>
</tr>
</thead>
<tbody>
<tr>
<td>vector2</td>
<td>x,y</td>
<td>2,3.0</td>
<td>2-dimensional vector</td>
</tr>
<tr>
<td>vector3</td>
<td>x,y,z</td>
<td>2,3.0,4</td>
<td>3-dimensional vector</td>
</tr>
<tr>
<td>temporal W3C media fragment</td>
<td>t=x</td>
<td>0</td>
<td>play from 0 seconds to end (and stop)</td>
</tr>
<tr>
<td>temporal W3C media fragment</td>
<td>t=x,y</td>
<td>0,2</td>
<td>play from 0 seconds till 2 seconds (and stop)</td>
</tr>
<tr>
<td>temporal W3C media fragment *</td>
<td>s=x</td>
<td>1</td>
<td>set playback speed of audio/video/3D anim</td>
</tr>
<tr>
<td>temporal W3C media fragment *</td>
<td>[-]loop</td>
<td>loop</td>
<td>enable looped playback of audio/video/3D anim</td>
</tr>
<tr>
<td></td>
<td></td>
<td>-loop</td>
<td>disable looped playback (does not affect playbackstate of media)</td>
</tr>
<tr>
<td>vector2</td>
<td>uv=u,v,uspeed,vspeed</td>
<td>0,0</td>
<td>set uv offset instantly (default speed = <code>1,1</code>)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>+0.5,+0.5</td>
<td>scroll instantly by adding 0.5 to the current uv coordinates</td>
</tr>
<tr>
<td></td>
<td></td>
<td>0.2,1,0.1,0.1</td>
<td>scroll (lerp) to uv coordinate <code>0.2,1</code> with <code>0.1</code> units per second</td>
</tr>
<tr>
<td></td>
<td></td>
<td>0,0,0,+0.1</td>
<td>scroll v coordinates with <code>0.1</code> units per second (infinitely)</td>
</tr>
<tr>
<td>media parameter (shader uniform)</td>
<td>u:&lt;uniform&gt;=&lt;string|float|vec2&gt;</td>
<td>u:col2=0,1,0</td>
<td>set shader uniform value of glsl/wgsl media</td>
</tr>
</tbody>
</table>
<blockquote>
<p>* = this is extending the <a href="https://www.w3.org/TR/media-frags/#mf-advanced">W3C media fragments</a> with (missing) playback/viewport-control. Normally <code>#t=0,2</code> implies setting start/stop-values AND starting playback, whereas <code>#s=0&amp;loop</code> allows pausing a video, speeding up/slowing down media, as well as enabling/disabling looping.</p>
<p>The rationale for <code>uv</code> is that the <code>xywh</code> Media Fragment deals with rectangular media, which does not translate well to 3D models (which use triangular polygons, not rectangles) positioned by uv-coordinates. This also explains the absence of a <code>scale</code> or <code>rotate</code> primitive: these are challenged by triangular polygons as well as by multiple possible origins (mesh- or texture-space).</p>
</blockquote>
<p>Example URI&rsquo;s:</p>
<ul>
<li><code>https://images.org/credits.jpg#uv=0,0,0,+0.1</code> (infinite vertical texturescrolling)</li>
<li><code>https://video.org/organogram.mp4#t=0&amp;loop&amp;uv=0.1,0.1,0.3,0.3</code> (animated tween towards region in looped video)</li>
<li><code>https://shaders.org/plasma.glsl#t=0&amp;u:col2=0,1,0</code> (red-green shader plasma starts playing from time-offset 0)</li>
</ul>
<pre><code> +──────────────────────────────────────────────────────────+
│ │
│ index.gltf#playall │
│ │ │
│ ├ # : #t=0&amp;shared=play │ apply default XR Fragment on load (`t` plays global 3D animation timeline)
│ ├ play : #t=0&amp;loop │ variable for [URI Templates (RFC6570)](https://www.rfc-editor.org/rfc/rfc6570)
│ │ │
│ ├── ◻ plane (with material) │
│ │ └ #: #uv=0,0,0,+0.1 │ infinite texturescroll `v` of uv·coordinates with 0.1/fps
│ │ │
│ ├── ◻ plane │
│ │ └ src: foo.jpg#uv=0,0,0,+0.1 │ infinite texturescroll `v` of uv·coordinates with 0.1/fps
│ │ │
│ ├── ◻ media │
│ │ └ src: cat.mp4#t=l:2,10&amp;uv=0.5,0.5 │ loop cat.mp4 (or mp3/wav/jpg) between 2 and 10 seconds (uv's shifted with 0.5,0.5)
│ │ │
│ └── ◻ wall │
│ ├ href: #color=blue │ updates uniform values (IFS shader e.g.)
│ ├ blue: t=0&amp;u:col=0,0,1 │ variable for [Level1 URI Templates (RFC6570)](https://www.rfc-editor.org/rfc/rfc6570)
│ └ src: ://a.com/art.glsl#{color}&amp;{shared} │ .fs/.vs/.glsl/.wgsl etc shader [Level1 URI Template (RFC6570)](https://www.rfc-editor.org/rfc/rfc6570)
│ │
│ │
+──────────────────────────────────────────────────────────+
</code></pre>
<blockquote>
<p>NOTE: URI Template variables are immutable and respect scope: in other words, the end-user cannot modify <code>blue</code> by entering an URL like <code>#blue=.....</code> in the browser URL, and <code>blue</code> is not accessible by the plane/media-object (however <code>{play}</code> would work).</p>
</blockquote>
<h1 id="navigating-3d">Navigating 3D</h1>
<table>
<thead>
<tr>
<th>fragment</th>
<th>type</th>
<th>functionality</th>
</tr>
</thead>
<tbody>
<tr>
<td><b>#pos</b>=0,0,0</td>
<td>vector3</td>
<td>position camera to 0,0,0 (+userheight in VR)</td>
</tr>
<tr>
<td><b>#pos</b>=room</td>
<td>string</td>
<td>position camera to position of objectname <code>room</code> (+userheight in VR)</td>
</tr>
<tr>
<td><b>#rot</b>=0,90,0</td>
<td>vector3</td>
<td>rotate camera</td>
</tr>
</tbody>
</table>
<p><a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/pos.js">» example implementation</a><br>
<a href="https://github.com/coderofsalvation/xrfragment/issues/5">» discussion</a><br></p>
<p>Here&rsquo;s the basic <strong>level1</strong> flow (with optional level2 features):</p>
<ol>
<li>the Y-coordinate of <code>pos</code> identifies the floorposition. This means that desktop-projections usually need to add 1.5m (average person height) on top (which is done automatically by VR/AR headsets), except in case of camera-switching.</li>
<li>set the position of the camera accordingly to the vector3 values of <code>#pos</code></li>
<li>if the referenced <code>#pos</code> object is animated, parent the current camera to that object (so it animates too)</li>
<li><code>rot</code> sets the rotation of the camera (only for non-VR/AR headsets, however a camera-value overrules this)</li>
<li><strong>level2</strong>: mediafragment <code>t</code> in the top-URL sets the playbackspeed and animation-range of the global scene animation</li>
<li>before scene load: the scene is cleared</li>
<li><strong>level2</strong>: after scene load: in case the scene (rootnode) contains an <code>#</code> default view with a fragment value: execute non-positional fragments via the hashbus (no top-level URL change)</li>
<li><strong>level2</strong>: after scene load: in case the scene (rootnode) contains an <code>#</code> default view with a fragment value: execute positional fragment via the hashbus + update top-level URL</li>
<li><strong>level2</strong>: in case of no default <code>#</code> view on the scene (rootnode), default player(rig) position <code>0,0,0</code> is assumed.</li>
<li>in case a <code>href</code> does not mention any <code>pos</code>-coordinate, the current position will be assumed</li>
</ol>
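<p>A minimal (level1) sketch of the <code>pos</code>/<code>rot</code> steps above in THREE.js; it assumes a <code>camera</code> and <code>scene</code> object and adds the user-height only for desktop projections:</p>
<pre><code class="language-javascript">// sketch: level1 '#pos' / '#rot' handling (desktop projection)
import * as THREE from 'three'

function gotoPos(value, camera, scene, userHeight = 1.5) {
  const target = new THREE.Vector3()
  const obj = scene.getObjectByName(value)
  if (obj) {
    obj.getWorldPosition(target)                   // '#pos=room' or '#pos=cam2'
    if (obj.animations.length) obj.add(camera)     // animated target: parent the camera
  } else {
    target.fromArray(value.split(',').map(Number)) // '#pos=0.5,0,0'
  }
  camera.position.copy(target)
  camera.position.y += userHeight                  // floorposition + average person height
}

function gotoRot(value, camera) {
  const [x, y, z] = value.split(',').map(Number)   // degrees in the URL, radians in THREE.js
  camera.rotation.set(THREE.MathUtils.degToRad(x),
                      THREE.MathUtils.degToRad(y),
                      THREE.MathUtils.degToRad(z))
}
</code></pre>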
<p>Here&rsquo;s an ascii representation of a 3D scene-graph which contains 3D objects <code>◻</code> and their metadata:</p>
<pre><code> +────────────────────────────────────────────────────────+
│ │
│ index.gltf │
│ │ │
│ ├── ◻ buttonA │
│ │ └ href: #pos=1,0,1&amp;t=100,200 │
│ │ │
│ └── ◻ buttonB │
│ └ href: other.fbx │ &lt;── file─agnostic (can be .gltf .obj etc)
│ │
+────────────────────────────────────────────────────────+
</code></pre>
<p>An XR Fragment-compatible browser viewing this scene allows the end-user to interact with <code>buttonA</code> and <code>buttonB</code>.<br>
In case of <code>buttonA</code> the end-user will be teleported to another location and time in the <strong>current loaded scene</strong>, but <code>buttonB</code> will <strong>replace the current scene</strong> with a new one, like <code>other.fbx</code>, and assume <code>pos=0,0,0</code>.</p>
<h1 id="top-level-url-processing">Top-level URL processing</h1>
<blockquote>
<p>Example URL: <code>://foo/world.gltf#cube&amp;pos=0,0,0</code></p>
</blockquote>
<p>The URL-processing-flow for hypermedia browsers goes like this:</p>
<ol>
<li>IF a <code>#cube</code> matches a custom property-key (of an object) in the 3D file/scene (<code>#cube</code>: <code>#......</code>) <b>THEN</b> execute that predefined_view.</li>
<li>IF scene operators (<code>pos</code>) and/or animation operator (<code>t</code>) are present in the URL then (re)position the camera and/or animation-range accordingly.</li>
<li>IF no camera-position has been set in <b>step 1 or 2</b> update the top-level URL with <code>#pos=0,0,0</code> (<a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/navigator.js#L31">example</a>)</li>
<li>IF a <code>#cube</code> matches the name (of an object) in the 3D file/scene then draw a line from the enduser(&rsquo;s heart) to that object (to highlight it).</li>
<li>IF a <code>#cube</code> matches anything else in the XR Word Graph (XRWG) draw wires to them (text or related objects).</li>
</ol>
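<p>A non-normative sketch of this flow, reusing the <code>hashbus</code> and <code>gotoPos</code> sketches from earlier chapters:</p>
<pre><code class="language-javascript">// sketch: processing '://foo/world.gltf#cube&amp;pos=0,0,0' after scene-load
function processTopLevelURL(hash, scene, camera) {
  const frags = Object.fromEntries(
    hash.replace(/^#/, '').split('&amp;').map((kv) =&gt; kv.split('=')))
  for (const key in frags) {
    scene.traverse((node) =&gt; {                      // 1. '#cube' matching a custom
      const preset = node.userData['#' + key]       //    property-key triggers that
      if (typeof preset === 'string') hashbus.pub(preset, false)  // predefined_view
    })
  }
  if (frags.pos) gotoPos(frags.pos, camera, scene)  // 2. (re)position the camera
  else history.replaceState({}, '', '#pos=0,0,0')   // 3. default top-level position
  // 4./5. draw wires from the enduser to objects/XRWG entries matching '#cube'
}
</code></pre>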
<h1 id="embedding-xr-content-using-src">Embedding XR content using src</h1>
<p><code>src</code> is the 3D version of the <a target="_blank" href="https://www.w3.org/html/wiki/Elements/iframe">iframe</a>.<br>
It instances content (in objects) in the current scene/asset, and follows logic similar to the previous chapter, except that it does not modify the camera.</p>
<table>
<thead>
<tr>
<th>fragment</th>
<th>type</th>
<th>example value</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>src</code></td>
<td>string (uri, hashtag/filter)</td>
<td><code>#cube</code><br><code>#sometag</code><br><code>#cube&amp;-ball_inside_cube</code><br><code>#-sky&amp;-rain</code><br><code>#-language&amp;english</code><br><code>#price=&gt;5</code><br><code>https://linux.org/penguin.png</code><br><code>https://linux.world/distrowatch.gltf#t=1,100</code><br><code>linuxapp://conference/nixworkshop/apply.gltf#-cta&amp;cta_apply</code><br><code>androidapp://page1?tutorial#pos=0,0,1&amp;t1,100</code><br><code>foo.mp3#0,0,0</code></td>
</tr>
</tbody>
</table>
<p>Here&rsquo;s an ascii representation of a 3D scene-graph with 3D objects <code>◻</code> which embeds remote &amp; local 3D objects <code>◻</code> with/out using filters:</p>
<pre><code> +────────────────────────────────────────────────────────+ +─────────────────────────+
│ │ │ │
│ index.gltf │ │ ocean.com/aquarium.fbx │
│ │ │ │ ├ room │
│ ├── ◻ canvas │ │ └── ◻ fishbowl │
│ │ └ src: painting.png │ │ ├─ ◻ bass │
│ │ │ │ └─ ◻ tuna │
│ ├── ◻ aquariumcube │ │ │
│ │ └ src: ://rescue.com/fish.gltf#fishbowl │ +─────────────────────────+
│ │ │
│ ├── ◻ bedroom │
│ │ └ src: #canvas │
│ │ │
│ └── ◻ livingroom │
│ └ src: #canvas │
│ │
+────────────────────────────────────────────────────────+
</code></pre>
<p>An XR Fragment-compatible browser viewing this scene lazy-loads and projects <code>painting.png</code> onto the (plane) object called <code>canvas</code> (which is copy-instanced in the bedroom and livingroom).<br>
Also, after lazy-loading <code>ocean.com/aquarium.gltf</code>, only the queried objects <code>fishbowl</code> (and <code>bass</code> and <code>tuna</code>) will be instanced inside <code>aquariumcube</code>.<br>
Resizing will happen according to its placeholder object <code>aquariumcube</code>, see chapter Scaling.<br></p>
<blockquote>
<p>Instead of cherrypicking a rootobject <code>#fishbowl</code> with <code>src</code>, additional filters can be used to include/exclude certain objects. See next chapter on filtering below.</p>
</blockquote>
<p><strong>Specification</strong>:</p>
<ol>
<li>local/remote content is instanced by the <code>src</code> (filter) value (and attaches it to the placeholder mesh containing the <code>src</code> property)</li>
<li>by default all objects are loaded into the instanced src (scene) object (but not shown yet)</li>
<li><b>local</b> <code>src</code> values (<code>#...</code> e.g.) starting with a non-negating filter (<code>#cube</code> e.g.) will (deep)reparent that object (with name <code>cube</code>) as the new root of the scene at position 0,0,0</li>
<li><b>local</b> <code>src</code> values should respect (negative) filters (<code>#-foo&amp;price=&gt;3</code>)</li>
<li>the instanced scene (from a <code>src</code> value) should be <b>scaled accordingly</b> to its placeholder object or <b>scaled relatively</b> based on the scale-property (of a geometry-less placeholder, an &lsquo;empty&rsquo;-object in blender e.g.). For more info see Chapter Scaling.</li>
<li><b>external</b> <code>src</code> values should be served with an appropriate mimetype (so the XR Fragment-compatible browser will know how to render it). The bare minimum supported mimetypes are listed below:</li>
<li><code>src</code> values should make their placeholder object invisible, and only flush its children when the resolved content can successfully be retrieved (see <a href="#broken-links">broken links</a>)</li>
<li><b>external</b> <code>src</code> values should respect the fallback link mechanism (see <a href="#broken-links">broken links</a>)</li>
<li>when the placeholder object is a 2D plane, but the mimetype is 3D, then render the spatial content on that plane via a stencil buffer.</li>
<li>src-values are non-recursive: when linking to an external object (<code>src: foo.fbx#bar</code>), then <code>src</code>-metadata on object <code>bar</code> should be ignored.</li>
<li>an external <code>src</code>-value should always allow a sourceportation icon within 3 meter: teleporting to the origin URI to which the object belongs.</li>
<li>when only one object was cherrypicked (<code>#cube</code> e.g.), set its position to <code>0,0,0</code></li>
<li>when the enduser clicks an href with <code>#t=1,0,0</code>, play will be applied to all src mediacontent with a timeline (mp4/mp3 e.g.)</li>
<li>a non-euclidian portal can be rendered for flat 3D objects (using stencil buffer e.g.) in case of spatial <code>src</code>-values (an object <code>#world3</code> or URL <code>world3.fbx</code> e.g.).</li>
</ol>
<ul>
<li><code>model/gltf-binary</code></li>
<li><code>model/gltf+json</code></li>
<li><code>image/png</code></li>
<li><code>image/jpg</code></li>
<li><code>text/plain;charset=utf-8</code></li>
</ul>
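<p>A rough, non-normative sketch of the core of this flow (cherrypicking local objects vs. mimetype-based loading of external content); the <code>loadGLTFInto</code>, <code>loadImagePlane</code> and <code>loadTextCanvas</code> helpers are hypothetical:</p>
<pre><code class="language-javascript">// sketch: resolving a 'src' value on a placeholder object
async function resolveSrc(placeholder, src, scene) {
  placeholder.visible = false                           // hide until content resolves
  if (src.startsWith('#')) {                            // local: cherrypick / filter
    const name = src.replace(/^#-?/, '').split('&amp;')[0]
    const root = scene.getObjectByName(name)
    if (root) {
      const copy = root.clone()
      copy.position.set(0, 0, 0)                        // cherrypicked object becomes root
      placeholder.add(copy)
    }
    return
  }
  const res  = await fetch(src, { method: 'HEAD' })     // external: check the mimetype
  const mime = res.headers.get('content-type') || ''
  if (mime.includes('model/gltf'))        loadGLTFInto(placeholder, src)
  else if (mime.startsWith('image/'))     loadImagePlane(placeholder, src)
  else if (mime.startsWith('text/plain')) loadTextCanvas(placeholder, src)
}
</code></pre>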
<p><a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/src.js">» example implementation</a><br>
<a href="https://github.com/coderofsalvation/xrfragment/blob/main/example/assets/src.gltf#L192">» example 3D asset</a><br>
<a href="https://github.com/coderofsalvation/xrfragment/issues/4">» discussion</a><br></p>
<h1 id="navigating-content-href-portals">Navigating content href portals</h1>
<p>navigation, portals &amp; mutations</p>
<table>
<thead>
<tr>
<th>fragment</th>
<th>type</th>
<th>example value</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>href</code></td>
<td>string (uri or predefined view)</td>
<td><code>#pos=1,1,0</code><br><code>#pos=1,1,0&amp;rot=90,0,0</code><br><code>://somefile.gltf#pos=1,1,0</code><br></td>
</tr>
</tbody>
</table>
<ol>
<li><p>clicking an outbound &ldquo;external&rdquo;- or &ldquo;file URI&rdquo; fully replaces the current scene and assumes <code>pos=0,0,0&amp;rot=0,0,0</code> by default (unless specified)</p></li>
<li><p>relocation/reorientation should happen locally for local URI&rsquo;s (<code>#pos=....</code>)</p></li>
<li><p>navigation should not happen &ldquo;immediately&rdquo; when user is more than 5 meter away from the portal/object containing the href (to prevent accidental navigation e.g.)</p></li>
<li><p>URL navigation should always be reflected in the client URL-bar (in case of javascript: see [<a href="https://github.com/coderofsalvation/xrfragment/blob/dev/src/3rd/js/three/navigator.js">here</a> for an example navigator), and only update the URL-bar after the scene (default fragment <code>#</code>) has been loaded.</p></li>
<li><p>In immersive XR mode, the navigator back/forward-buttons should be always visible (using a wearable e.g., see [<a href="https://github.com/coderofsalvation/xrfragment/blob/dev/example/aframe/sandbox/index.html#L26-L29">here</a> for an example wearable)</p></li>
<li><p>make sure that the &ldquo;back-button&rdquo; of the &ldquo;browser-history&rdquo; always refers to the previous position (see [<a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/href.js#L97">here</a>)</p></li>
<li><p>ignore previous rule in special cases, like clicking an <code>href</code> using camera-portal collision (the back-button could cause a teleport-loop if the previous position is too close)</p></li>
<li><p>href-events should bubble upward the node-tree (from children to ancestors, so that ancestors can also contain an href), however only 1 href can be executed at the same time.</p></li>
<li><p>the end-user navigator back/forward buttons should repeat a back/forward action until a <code>pos=...</code> primitive is found (the stateless xrf:// href-values should not be pushed to the url-history)</p></li>
</ol>
<p><a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/href.js">» example implementation</a><br>
<a href="https://github.com/coderofsalvation/xrfragment/blob/main/example/assets/href.gltf#L192">» example 3D asset</a><br>
<a href="https://github.com/coderofsalvation/xrfragment/issues/1">» discussion</a><br></p>
<h2 id="walking-surfaces">Walking surfaces</h2>
<p>XR Fragment-compatible viewers can infer this data by scanning the scene for:</p>
<ol>
<li>materialless (nameless &amp; textureless) mesh-objects (without <code>src</code> and <code>href</code>)</li>
</ol>
<blockquote>
<p>optionally the viewer can offer thumbstick, mouse or joystick teleport-tools for non-roomscale VR/AR setups.</p>
</blockquote>
<h2 id="ux-spec">UX spec</h2>
<p>End-users should always have read/write access to:</p>
<ol>
<li>the current (toplevel) <b>URL</b> (an URLbar etc)</li>
<li>URL-history (a <b>back/forward</b> button e.g.)</li>
<li>Clicking/Touching an <code>href</code> navigates (and updates the URL) to another scene/file (and coordinate e.g. in case the URL contains XR Fragments).</li>
</ol>
<h2 id="scaling-instanced-content">Scaling instanced content</h2>
<p>Sometimes embedded properties (like <code>src</code>) instance new objects.<br>
But what about their scale?<br>
How does the scale of the object (with the embedded properties) impact the scale of the referenced content?<br></p>
<blockquote>
<p>Rule of thumb: visible placeholder objects act as a &lsquo;3D canvas&rsquo; for the referenced scene (a plane acts like a 2D canvas for images, a cube as a 3D canvas e.g.).</p>
</blockquote>
<ol>
<li><b>IF</b> an embedded property (<code>src</code> e.g.) is set on a non-empty placeholder object (geometry of &gt;2 vertices):</li>
</ol>
<ul>
<li>calculate the <b>bounding box</b> of the &ldquo;placeholder&rdquo; object (maxsize=1.4 e.g.)</li>
<li>hide the &ldquo;placeholder&rdquo; object (material e.g.)</li>
<li>instance the <code>src</code> scene as a child of the existing object</li>
<li>calculate the <b>bounding box</b> of the instanced scene, and scale it accordingly (to 1.4 e.g.)</li>
</ul>
<blockquote>
<p>REASON: a non-empty placeholder object can act as a protective bounding-box (for remote content which might grow over time e.g.)</p>
</blockquote>
<ol start="2">
<li>ELSE multiply the scale-vector of the instanced scene with the scale-vector (a common property of a 3D node) of the <b>placeholder</b> object.</li>
</ol>
<blockquote>
<p>TODO: needs intermediate visuals to make things more obvious</p>
</blockquote>
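<p>A minimal sketch of both cases in THREE.js, treating geometries with more than 2 vertices as &lsquo;non-empty&rsquo; (the function name is illustrative):</p>
<pre><code class="language-javascript">// sketch: scale instanced 'src' content to its placeholder
import * as THREE from 'three'

function fitToPlaceholder(placeholder, instanced) {
  const nonEmpty = placeholder.geometry &amp;&amp;
    placeholder.geometry.getAttribute('position').count &gt; 2
  if (nonEmpty) {
    // 1. the bounding box of the placeholder acts as a protective '3D canvas'
    const max = new THREE.Box3().setFromObject(placeholder).getSize(new THREE.Vector3())
    const cur = new THREE.Box3().setFromObject(instanced).getSize(new THREE.Vector3())
    instanced.scale.multiplyScalar(Math.min(max.x / cur.x, max.y / cur.y, max.z / cur.z))
    placeholder.material.visible = false        // hide the placeholder itself
  }
  // 2. (else) parenting alone applies the placeholder's scale-vector to the instanced scene
  placeholder.add(instanced)
}
</code></pre>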
<h1 id="xr-fragment-pos">XR Fragment: pos</h1>
<p><a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/pos.js">» example implementation</a><br></p>
<h1 id="xr-fragment-rot">XR Fragment: rot</h1>
<p><a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/pos.js">» example implementation</a><br></p>
<h1 id="xr-fragment-t">XR Fragment: t</h1>
<p><a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/t.js">» example implementation</a><br></p>
<h1 id="xr-audio-video-integration">XR audio/video integration</h1>
<p>To play global audio/video items:</p>
<ol>
<li>add a <code>src: foo.mp3</code> or <code>src: bar.mp4</code> metadata to a 3D object (<code>cube</code> e.g.)</li>
<li>to enable auto-play and global timeline (<code>t</code>) control: hardcode a <code>t</code> XR Fragment (<code>src: bar.mp3#t=0&amp;loop</code> e.g.)</li>
<li>to play it, add <code>href: #cube</code> somewhere else</li>
<li>to enable enduser-triggered play, use a URI Template XR Fragment (<code>src: bar.mp3#{player}</code> and <code>play: t=0&amp;loop</code> and <code>href: xrf://#player=play</code> e.g.)</li>
<li>when the enduser clicks the <code>href</code>, <code>#t=0&amp;loop</code> (play) will be applied to the <code>src</code> value</li>
</ol>
<blockquote>
<p>NOTE: hardcoded framestart/framestop uses the sampleRate/fps of embedded audio/video, otherwise the global fps applies. For more info see <code>t</code> (above).</p>
</blockquote>
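<p>A rough sketch of steps 4 and 5 (enduser-triggered play), reusing the <code>hashbus</code> sketch from earlier; <code>playMedia</code> and <code>scene</code> are assumed to exist in the viewer:</p>
<pre><code class="language-javascript">// sketch: clicking  href: xrf://#player=play  resolves  src: bar.mp3#{player}
// against the object-extra  play: t=0&amp;loop  and starts playback
hashbus.on((frag) =&gt; {
  const [key, value] = frag.split('=')               // 'player=play'
  scene.traverse((node) =&gt; {
    const template = node.userData.src               // 'bar.mp3#{player}'
    const preset   = node.userData[value]            // 't=0&amp;loop'
    if (template &amp;&amp; template.includes('{' + key + '}') &amp;&amp; preset) {
      playMedia(node, template.replace('{' + key + '}', preset))  // bar.mp3#t=0&amp;loop
    }
  })
})
</code></pre>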
<h1 id="xr-fragment-filters">XR Fragment filters</h1>
<p>Include, exclude, hide/shows objects using space-separated strings:</p>
<table>
<thead>
<tr>
<th>example</th>
<th>outcome</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>#-sky</code></td>
<td>show everything except object named <code>sky</code></td>
</tr>
<tr>
<td><code>#-language&amp;english</code></td>
<td>hide everything with tag <code>language</code>, but show all tag <code>english</code> objects</td>
</tr>
<tr>
<td><code>#-price&amp;price=&gt;10</code></td>
<td>hide all objects with property <code>price</code>, then only show object with price above 10</td>
</tr>
<tr>
<td><code>#-house*</code></td>
<td>hide <code>house</code> object and everything inside (=<code>*</code>)</td>
</tr>
</tbody>
</table>
<p>It&rsquo;s a simple but powerful syntax which allows filtering the scene with a searchengine prompt-style feel:</p>
<ol>
<li>filters are a way to traverse a scene, and filter objects based on their name, tag- or property-values.</li>
</ol>
<ul>
<li>see <a href="https://coderofsalvation.github.io/xrfragment.media/queries.mp4">an (outdated) example video here</a> which used a dedicated <code>q=</code> variable (now deprecated and usable directly)</li>
</ul>
<h2 id="including-excluding">including/excluding</h2>
<p>By default, selectors work like photoshop-layers: they scan for matching layer(name/properties) within the scene-graph.
Each matched object (not their children) will be toggled (in)visible when selecting.</p>
<table>
<thead>
<tr>
<th>operator</th>
<th>info</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>-</code></td>
<td>hides object(s) (<code>#-myobject&amp;-objects</code> e.g.)</td>
</tr>
<tr>
<td><code>=</code></td>
<td>indicates an object-embedded custom property key/value (<code>#price=4&amp;category=foo</code> e.g.)</td>
</tr>
<tr>
<td><code>=&gt;</code> <code>=&lt;</code></td>
<td>compare float or int number (<code>#price=&gt;4</code> e.g.)</td>
</tr>
<tr>
<td><code>*</code></td>
<td>deepselect: automatically select children of selected object, including local (nonremote) embedded objects (starting with <code>#</code>)</td>
</tr>
</tbody>
</table>
<blockquote>
<p>NOTE 1: after an external embedded object has been instanced (<code>src: https://y.com/bar.fbx#room</code> e.g.), filters do not affect them anymore (reason: local tag/name collisions can be mitigated easily, but not in case of remote content).</p>
<p>NOTE 2: depending on the used 3D framework, toggling objects (in)visible should happen by enabling/disabling writing to the colorbuffer (to allow children to remain visible while their parents are invisible).</p>
</blockquote>
<p><a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/q.js">» example implementation</a>
<a href="https://github.com/coderofsalvation/xrfragment/blob/main/example/assets/filter.gltf#L192">» example 3D asset</a>
<a href="https://github.com/coderofsalvation/xrfragment/issues/3">» discussion</a></p>
<h2 id="filter-parser">Filter Parser</h2>
<p>Here&rsquo;s how to write a filter parser:</p>
<ol>
<li>create an associative array/object to store filter-arguments as objects</li>
<li>detect object id&rsquo;s &amp; properties <code>foo=1</code> and <code>foo</code> (reference regex= <code>~/^.*=[&gt;&lt;=]?/</code> )</li>
<li>detect excluders like <code>-foo</code>,<code>-foo=1</code>,<code>-.foo</code>,<code>-/foo</code> (reference regex= <code>/^-/</code> )</li>
<li>detect root selectors like <code>/foo</code> (reference regex= <code>/^[-]?\//</code> )</li>
<li>detect number values like <code>foo=1</code> (reference regex= <code>/^[0-9\.]+$/</code> )</li>
<li>detect operators so you can easily strip keys (reference regex= <code>/(^-|\*$)/</code> )</li>
<li>detect exclude keys like <code>-foo</code> (reference regex= <code>/^-/</code> )</li>
<li>for every filter token split string on <code>=</code></li>
<li>set <code>root</code> to <code>true</code> or <code>false</code> (true = <code>/</code> root selector is present)</li>
<li>set <code>show</code> to <code>true</code> or <code>false</code> (false = excluder <code>-</code> is present)</li>
</ol>
<blockquote>
<p>An example filter-parser (which compiles to many languages) can be <a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/xrfragment/Filter.hx">found here</a></p>
</blockquote>
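<p>A condensed, non-normative JS flavour of the steps above (see <code>Filter.hx</code> for the reference implementation):</p>
<pre><code class="language-javascript">// sketch: parse '#-sky&amp;price=&gt;10' into filter objects
function parseFilters(fragment) {
  const filters = {}                                      // 1. associative array
  for (const token of fragment.replace(/^#/, '').split('&amp;')) {
    const show = !/^-/.test(token)                        // 3./7. excluders like '-foo'
    const root = /^[-]?\//.test(token)                    // 4. root selectors like '/foo'
    const deep = /\*$/.test(token)                        // deepselect '*'
    const [key, value] = token.replace(/(^-|^\/|\*$)/g, '').split('=')  // 6./8.
    filters[key] = {
      show, root, deep,                                   // 9./10. booleans
      value: /^[0-9.]+$/.test(value || '') ? parseFloat(value) : value  // 5. numbers
    }
  }
  return filters
}
// parseFilters('#-sky&amp;price=&gt;10')
//   =&gt; { sky: { show:false, ... }, price: { show:true, value:'&gt;10', ... } }
</code></pre>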
<h1 id="visible-links">Visible links</h1>
<p>When predefined views, XRWG fragments and ID fragments (<code>#cube</code> or <code>#mytag</code> e.g.) are triggered by the enduser (via toplevel URL or clicking <code>href</code>):</p>
<ol>
<li>draw a wire from the enduser (preferably a bit below the camera, heartposition) to object(s) matching that ID (objectname)</li>
<li>draw a wire from the enduser (preferably a bit below the camera, heartposition) to object(s) matching that <code>tag</code> value</li>
<li>draw a wire from the enduser (preferably a bit below the camera, heartposition) to object(s) containing that in their <code>src</code> or <code>href</code> value</li>
</ol>
<p>The obvious approach for this is to consult the XRWG (<a href="https://github.com/coderofsalvation/xrfragment/blob/feat/macros/src/3rd/js/XRWG.js">example</a>), which basically has all these things already collected/organized for you during scene-load.</p>
<p><strong>UX</strong></p>
<ol start="4">
<li>do not update the wires when the enduser moves, leave them as is</li>
<li>offer a control near the back/forward button which allows the user to control (or turn off) the correlation-intensity of the XRWG</li>
</ol>
<h1 id="text-in-xr-tagging-linking-to-spatial-objects">Text in XR (tagging,linking to spatial objects)</h1>
<p>How does XR Fragments interlink text with objects?</p>
<blockquote>
<p>XR Fragments does this by collapsing space into a <strong>Word Graph</strong> (the <strong>XRWG</strong> <a href="https://github.com/coderofsalvation/xrfragment/blob/feat/macros/src/3rd/js/XRWG.js">example</a>), augmented by Bib(s)Tex.</p>
</blockquote>
<p>Instead of just throwing together all kinds of media types into one experience (games), what about their tagged/semantical relationships?<br>
Perhaps the following question is related: why is HTML adopted less in games outside the browser?</p>
<p>Hence:</p>
<ol>
<li>XR Fragments promotes (de)serializing a scene to a (lowercase) XRWG (<a href="https://github.com/coderofsalvation/xrfragment/blob/feat/macros/src/3rd/js/XRWG.js">example</a>)</li>
<li>XR Fragments primes the XRWG, by collecting words from the <code>tag</code> and name-property of 3D objects.</li>
<li>XR Fragments primes the XRWG, by collecting words from <strong>optional</strong> metadata <strong>at the end of content</strong> of text (see default mimetype &amp; Data URI)</li>
<li>XR Fragments primes the XRWG, by collecting tags/id&rsquo;s from linked hypermedia (URI fragments for HTML e.g.)</li>
<li>The XRWG should be recalculated when textvalues (in <code>src</code>) change</li>
<li>HTML/RDF/JSON is still great, but is beyond the XRWG-scope (they fit better in the application-layer, or as embedded src content)</li>
<li>Applications don&rsquo;t have to be able to access the XRWG programmatically, as they can easily generate one themselves by traversing the scene-nodes.</li>
<li>The XR Fragment focuses on fast and easy-to-generate end-user controllable word graphs (instead of complex implementations that try to defeat word ambiguity)</li>
<li>Instead of exact lowercase word-matching, Levenshtein-distance-based matching is preferred</li>
</ol>
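<p>A minimal sketch of priming such an XRWG from a THREE.js scene (points 1 and 2 above); words from text-content and linked hypermedia (points 3 and 4) would be added the same way, and the function name is illustrative:</p>
<pre><code class="language-javascript">// sketch: prime a (lowercase) XRWG from object names and 'tag' extras
function buildXRWG(scene) {
  const xrwg = {}                                  // word =&gt; [ '#name', '#name', ... ]
  const add = (word, id) =&gt; {
    word = word.toLowerCase()
    xrwg[word] = xrwg[word] || []
    if (!xrwg[word].includes(id)) xrwg[word].push(id)
  }
  scene.traverse((node) =&gt; {
    if (!node.name) return
    const id = '#' + node.name
    add(node.name, id)                             // object names
    for (const tag of (node.userData.tag || '').split(' '))
      if (tag) add(tag, id)                        // space-separated 'tag' extras
  })
  return xrwg
}
</code></pre>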
<p>Example of generating an XRWG out of a 3D scene and textdata with hashtags:</p>
<pre><code> http://y.io/z.fbx | Derived XRWG (expressed as JSON)
----------------------------------------------------------------------------+--------------------------------------
| Chapter: ['#mydoc']
+-[src: data:.....]----------------------+ +-[3D mesh]-+ | one: ['#mydoc']
| Chapter one | | / \ | | houses: ['#castle','#mydoc','#house']
| | | / \ | | baroque: ['#mydoc','#castle']
| John built houses in baroque style. | | / \ | | castle: ['#baroque','#house']
| | | |_____| | | john: ['#john','#mydoc']
| | +-----│-----+ | mydoc: ['#mydoc']
| | │ |
| | ├─ name: castle |
| | └─ tag: house baroque |
+----------------------------------------+ |
└─ name: mydoc [3D mesh-+ |
| O ├─ name: john |
| /|\ | |
| / \ | | ^ ^ ^
+--------+ | | | |
|
[remotestorage.io]+ [ localstorage]-+ | &lt;- the XR Fragment-compatible
| XRWG (JSON) | | XRWG (JSON | | &lt;- 3D hypermedia viewer should
| | | | | &lt;- be able to select the active XRWG
+-----------------+ +---------------+ |
</code></pre>
<p>This allows hasslefree authoring and copy-paste of associations <strong>for and by humans</strong>, but also makes these URLs possible:</p>
<table>
<thead>
<tr>
<th>URL example</th>
<th>Result</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>https://my.com/foo.gltf#baroque</code></td>
<td>draws lines between 3D mesh <code>castle</code>, and <code>mydoc</code>&rsquo;s text <code>baroque</code></td>
</tr>
<tr>
<td><code>https://my.com/foo.gltf#john</code></td>
<td>draws lines between mesh <code>john</code>, and the text <code>John</code> of <code>mydoc</code></td>
</tr>
<tr>
<td><code>https://my.com/foo.gltf#house</code></td>
<td>draws lines between mesh <code>castle</code>, and other objects with tag <code>house</code> or <code>todo</code></td>
</tr>
</tbody>
</table>
<blockquote>
<p>the URI fragment <code>#john&amp;mydoc&amp;house</code> would draw a connection between these 3 meshes.</p>
</blockquote>
<p>The XRWG allows endusers to show/hide relationships in realtime in XR Browsers at various levels:</p>
<ul>
<li>wordmatch <strong>inside</strong> <code>src</code> text</li>
<li>wordmatch <strong>inside</strong> <code>href</code> text</li>
<li>wordmatch object-names</li>
<li>wordmatch object-tagnames</li>
</ul>
<p>Spatial wires can be rendered between words/objects etc.<br>
Some pointers for good UX (but not necessary to be XR Fragment compatible):</p>
<ol start="9">
<li>The XR Browser needs to adjust tag-scope based on the endusers needs/focus (infinite tagging only makes sense when environment is scaled down significantly)</li>
<li>The XR Browser should always allow the human to view/edit the metadata, by clicking &lsquo;toggle metadata&rsquo; on the &lsquo;back&rsquo; (contextmenu e.g.) of any XR text, anywhere anytime.</li>
<li>respect multi-line BiBTeX metadata in text because of <a href="#core-principle">the core principle</a></li>
<li>Default font (unless specified otherwise) is a modern monospace font, for maximized tabular expressiveness (see <a href="#core-principle">the core principle</a>).</li>
<li>anti-pattern: hardcoupling an XR Browser with a mandatory <strong>markup/scripting-language</strong> which departs from unobtrusive plain text (HTML/VRML/Javascript) (see <a href="#core-principle">the core principle</a>)</li>
<li>anti-pattern: limiting human introspection, by abandoning plain text as a first-class citizen.</li>
</ol>
<h2 id="default-data-uri-mimetype">Default Data URI mimetype</h2>
<p>The <code>src</code>-values work as expected (respecting mime-types), however:</p>
<p>The XR Fragment specification advises bumping the traditional default browser-mimetype</p>
<p><code>text/plain;charset=US-ASCII</code></p>
<p>to a hashtag-friendly one:</p>
<p><code>text/plain;charset=utf-8;hashtag</code></p>
<p>This indicates that:</p>
<ul>
<li>utf-8 is supported by default</li>
<li>words beginning with <code>#</code> (hashtags) will prime the XRWG by adding the hashtag to it, linked to the current sentence/paragraph/alltext (depending on &lsquo;.&rsquo;)</li>
</ul>
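<p>A minimal sketch of that hashtag-priming for <code>text/plain;charset=utf-8;hashtag</code> content, reusing the XRWG structure sketched earlier (sentence/paragraph scoping omitted for brevity):</p>
<pre><code class="language-javascript">// sketch: prime the XRWG with hashtags found in plain text (e.g. 'Hello #friends')
function primeXRWGFromText(text, sourceId, xrwg) {
  for (const [, tag] of text.matchAll(/#([^\s#]+)/g)) {   // words beginning with '#'
    const word = tag.toLowerCase()
    xrwg[word] = xrwg[word] || []
    if (!xrwg[word].includes(sourceId)) xrwg[word].push(sourceId)
  }
}
// primeXRWGFromText('Hello #friends', '#article_canvas', xrwg)
</code></pre>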
<p>Advantages:</p>
<ul>
<li>out-of-the-box (de)multiplex human text and metadata in one go (see <a href="#core-principle">the core principle</a>)</li>
<li>no network-overhead for metadata (see <a href="#core-principle">the core principle</a>)</li>
<li>ensuring high FPS: realtime HTML/RDF historically is too &lsquo;requesty&rsquo;/&lsquo;parsy&rsquo; for game studios</li>
<li>rich send/receive/copy-paste everywhere by default, metadata being retained (see <a href="#core-principle">the core principle</a>)</li>
<li>net result: fewer webservices, therefore fewer servers, and overall better FPS in XR</li>
</ul>
<blockquote>
<p>This significantly expands expressiveness and portability of human tagged text, by <strong>postponing machine-concerns to the end of the human text</strong> in contrast to literal interweaving of content and markupsymbols (or extra network requests, webservices e.g.).</p>
</blockquote>
<p>For all other purposes, regular mimetypes can be used (but are not required by the spec).<br></p>
<h2 id="url-and-data-uri">URL and Data URI</h2>
<pre><code> +--------------------------------------------------------------+ +------------------------+
| | | author.com/article.txt |
| index.gltf | +------------------------+
| │ | | |
| ├── ◻ article_canvas | | Hello #friends |
| │ └ src: ://author.com/article.txt | | |
| │ | +------------------------+
| └── ◻ note_canvas |
| └ src:`data:welcome human\n@book{sunday...}` |
| |
| |
+--------------------------------------------------------------+
</code></pre>
<p>The enduser will only see <code>welcome human</code> and <code>Hello friends</code> rendered verbatim (see mimetype).
The beauty is that text in Data URI automatically promotes rich copy-paste (retaining metadata).
In both cases, the text gets rendered immediately (onto a plane geometry, hence the name &lsquo;_canvas&rsquo;).
The XR Fragment-compatible browser can let the enduser access visual-meta(data)-fields after interacting with the object (contextmenu e.g.).</p>
<blockquote>
<p>additional tagging using <a href="https://github.com/coderofsalvation/hashtagbibs">bibs</a>: to tag spatial object <code>note_canvas</code> with &lsquo;todo&rsquo;, the enduser can type or speak <code>#note_canvas@todo</code></p>
</blockquote>
<h1 id="importing-exporting">Importing/exporting</h1>
<p>For usecases like importing/exporting/p2p casting a scene, the issue of external files comes into play.</p>
<ol>
<li>export: if the 3D scene contains relative src/href values, rewrite them into absolute URL values.</li>
</ol>
<h1 id="reflection-mapping">Reflection Mapping</h1>
<p>Environment mapping is crucial for creating realistic reflections and lighting effects on 3D objects.
To apply environment mapping efficiently in a 3D scene, traverse the scene graph and assign each object&rsquo;s environment map based on the nearest ancestor&rsquo;s texture map. This ensures that objects inherit the correct environment mapping from their closest parent with a texture, enhancing the visual consistency and realism.</p>
<pre><code> +--------------------------------+
| |
| index.usdz |
| │ |
| └── ◻ sphere (texture:foo) |
| └ ◻ cube (texture:bar) | envMap = foo
| └ ◻ cylinder | envMap = bar
+--------------------------------+
</code></pre>
<p>Most 3D viewers apply one and the same environment map to all models; this logic, however,
allows a more natural &amp; automatic strategy for reflection mapping:</p>
<ol>
<li>traverse the scene graph depth-first</li>
<li>remember the most recent parentnode (P) with a texture material</li>
<li>for every non-root node with a texture material (see the sketch below):<br>
3.1 clone that material (as materials might be shared across objects)<br>
3.2 set the environment map to the last known parent texture (P)</li>
</ol>
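<p>A minimal sketch of these steps, assuming a three.js-style scene graph (<code>material.map</code>, <code>material.envMap</code>, <code>children</code>); node- and property-names are illustrative:</p>
<pre><code>// hypothetical sketch: inherit envMap from the nearest textured ancestor (P)
function applyReflectionMaps(node, parentTexture){
  const texture = node.material ? node.material.map : null;
  if( texture ){
    if( parentTexture ){                        // non-root textured node below a textured ancestor
      node.material = node.material.clone();    // materials may be shared across objects
      node.material.envMap = parentTexture;     // inherit from the last known parent texture (P)
    }
    parentTexture = texture;                    // this node becomes P for its descendants
  }
  node.children.forEach( function(child){ applyReflectionMaps(child, parentTexture); } );
}

applyReflectionMaps(scene, null);               // depth-first traversal from the root
</code></pre>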
<h1 id="transclusion-broken-link-resolution">Transclusion (broken link) resolution</h1>
<p>In spirit of Ted Nelson&rsquo;s &lsquo;transclusion resolution&rsquo;, there&rsquo;s a soft-mechanism to harden links &amp; minimize broken links in various ways:</p>
<ol>
<li>defining a different transport protocol (https vs ipfs or DAT) in <code>src</code> or <code>href</code> values can make a difference</li>
<li>mirroring files on another protocol using (HTTP) errorcode tags in <code>src</code> or <code>href</code> properties</li>
<li>in case of <code>src</code>: nest a copy of the embedded object inside the placeholder object (<code>embeddedObject</code>); it will not be replaced when the request fails</li>
</ol>
<blockquote>
<p>due to the popularity, maturity and extensiveness of HTTP codes for client/server communication, non-HTTP protocols easily map to HTTP codes (ipfs ERR_NOT_FOUND maps to 404 e.g.)</p>
</blockquote>
<p>For example:</p>
<pre><code> +────────────────────────────────────────────────────────+
│ │
│ index.gltf │
│ │ │
│ │ #: #-offlinetext │
│ │ │
│ ├── ◻ buttonA │
│ │ └ href: http://foo.io/campagne.fbx │
│ │ └ href@404: ipfs://foo.io/campagne.fbx │
│ │ └ href@400: #clienterrortext │
│ │ └ ◻ offlinetext │
│ │ │
│ └── ◻ embeddedObject &lt;--------- the meshdata inside embeddedObject will (not)
│ └ src: https://foo.io/bar.gltf │ be flushed when the request (does not) succeed.
│ └ src@404: http://foo.io/bar.gltf │ So worstcase the 3D data (of the time of publishing index.gltf)
│ └ src@400: https://archive.org/l2kj43.gltf │ will be displayed.
│ │
+────────────────────────────────────────────────────────+
</code></pre>
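<p>A minimal sketch of resolving such fallbacks for <code>src</code> values; <code>fetchModel</code> and the error-to-HTTP-code mapping are assumptions for illustration:</p>
<pre><code>// hypothetical sketch: try src, then the src@&lt;errorcode&gt; fallbacks of the example above
async function resolveSrc(props, fetchModel){
  try{
    return await fetchModel( props['src'] );
  }catch(err){
    // non-HTTP errors are mapped to HTTP-like codes first (ipfs ERR_NOT_FOUND -&gt; 404)
    const fallback = props['src@' + (err.status || 404)] || props['src@400'];
    if( fallback ) return fetchModel( fallback );
    return null;  // keep the meshdata already nested inside the placeholder object
  }
}
</code></pre>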
<h1 id="topic-based-index-less-webrings">Topic-based index-less Webrings</h1>
<p>As hashtags in URLs map to the XRWG, <code>href</code>-values can be used to promote topic-based index-less webrings.<br>
Consider 3D scenes linking to each other using these <code>href</code> values:</p>
<ul>
<li><code>href: schoolA.edu/projects.gltf#math</code></li>
<li><code>href: schoolB.edu/projects.gltf#math</code></li>
<li><code>href: university.edu/projects.gltf#math</code></li>
</ul>
<p>These links would all show visible links to math-tagged objects in the scene.<br>
To filter out non-related objects one could take it a step further using filters:</p>
<ul>
<li><code>href: schoolA.edu/projects.gltf#math&amp;-topics math</code></li>
<li><code>href: schoolB.edu/projects.gltf#math&amp;-courses math</code></li>
<li><code>href: university.edu/projects.gltf#math&amp;-theme math</code></li>
</ul>
<blockquote>
<p>This would hide all objects tagged with <code>topic</code>, <code>courses</code> or <code>theme</code> (including math), so that only objects tagged with <code>math</code> remain visible</p>
</blockquote>
<p>This makes spatial content multi-purpose, without the need to split content into separate files, or to show/hide things using a complex logic layer like javascript.</p>
<h1 id="uri-templates-rfc6570">URI Templates (RFC6570)</h1>
<p>XR Fragments adopts Level1 URI <strong>Fragment</strong> expansion to provide safe interactivity.<br>
The following demonstrates a simple video player:</p>
<pre><code>
+─────────────────────────────────────────────+
│ │
│ foo.usdz │
│ │ │
│ │ │
│ ├── ◻ stopbutton │
│ │ ├ #: #-stopbutton │
│ │ └ href: #player=stop&amp;-stopbutton │ (stop and hide stop-button)
│ │ │
│ └── ◻ plane │
│ ├ play: #t=l:0,10 │
│ ├ stop: #t=0,0 │
│ ├ href: #player=play&amp;stopbutton │ (play and show stop-button)
│ └ src: cat.mp4#{player} │
│ │
│ │
+─────────────────────────────────────────────+
</code></pre>
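<p>The only templating mechanism involved is RFC6570 Level 1 expansion: substituting a single, percent-encoded variable value. A minimal sketch follows (which value ends up in <code>{player}</code> is driven by the <code>href</code> interactions above):</p>
<pre><code>// hypothetical sketch: RFC6570 Level-1 expansion of {player} inside an src value
function expandLevel1(template, vars){
  return template.replace(/\{(\w+)\}/g, function(match, name){
    return name in vars ? encodeURIComponent(vars[name]) : match;
  });
}

expandLevel1('cat.mp4#{player}', { player: 'play' });   // returns 'cat.mp4#play'
</code></pre>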
<h1 id="additional-scene-metadata">Additional scene metadata</h1>
<p>XR Fragments does not aim to redefine the metadata-space or accessibility-space by introducing its own cataloging-metadata fields.
Instead, it encourages browsers to scan nodes for the following custom properties:</p>
<ul>
<li><a href="https://spdx.dev/">SPDX</a> license information</li>
<li><a href="https://www.w3.org/WAI/standards-guidelines/aria/">ARIA</a> attributes (<code>aria-*: .....</code>)</li>
<li><a href="https://ogp.me">Open Graph</a> attributes (<code>og:*: .....</code>)</li>
<li><a href="https://www.dublincore.org/specifications/dublin-core/application-profile-guidelines/">Dublin-Core</a> attributes(<code>dc:*: .....</code>)</li>
<li><a href="https://bibtex.eu/fields">BibTex</a> when known bibtex-keys exist with values enclosed in <code>{</code> and <code>},</code></li>
</ul>
<p><strong>ARIA</strong> (<code>aria-description</code>) is the most important to support, as it promotes accessibility and allows scene transcripts. Please start <code>aria-description</code> with a verb to aid transcripts.</p>
<blockquote>
<p>Example: object &lsquo;tryceratops&rsquo; with <code>aria-description: is a huge dinosaur standing on a #mountain</code> generates transcript <code>#tryceratops is a huge dinosaur standing on a #mountain</code>, where the hashtags are clickable XR Fragments (activating the visible-links in the XR browser).</p>
</blockquote>
<p>Individual nodes can be enriched with such metadata, but most importantly the scene node:</p>
<table>
<thead>
<tr>
<th>metadata key</th>
<th>example value</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>aria-description</code>, <code>og:description</code>, <code>dc:description</code></td>
<td><code>An immersive experience about Triceratops</code> (*)</td>
</tr>
<tr>
<td><code>SPDX</code></td>
<td><code>CC0-1.0</code></td>
</tr>
<tr>
<td><code>dc:creator</code></td>
<td><code>John Doe</code></td>
</tr>
<tr>
<td><code>dc:title</code>, <code>og:title</code></td>
<td><code>Triceratops</code> (*)</td>
</tr>
<tr>
<td><code>og:site_name</code></td>
<td><code>https://xrfragment.org</code></td>
</tr>
<tr>
<td><code>dc.publisher</code></td>
<td><code>NLNET</code></td>
</tr>
<tr>
<td><code>dc.date</code></td>
<td><code>2024-01-01</code></td>
</tr>
<tr>
<td><code>dc.identifier</code></td>
<td><code>XRFRAGMENT-001</code></td>
</tr>
<tr>
<td><code>journal</code> (bibTeX)</td>
<td><code>{Future Of Text Vol 3},</code></td>
</tr>
</tbody>
</table>
<blockquote>
<p>* = these are interchangeable (only one needs to be defined)</p>
</blockquote>
<p>There&rsquo;s no silver bullet when it comes to metadata, so implementations should read and honour metadata wherever it already resides.</p>
<blockquote>
<p>These attributes can be scanned and presented during an <code>href</code> or <code>src</code> eye/mouse-over.</p>
</blockquote>
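<p>A minimal sketch of such a scan, assuming custom properties are exposed three.js-style via <code>userData</code>:</p>
<pre><code>// hypothetical sketch: collect known metadata keys from a node's custom properties
const META_KEYS = /^(aria-|og:|dc[:.]|SPDX$)/;

function scanMetadata(node){
  const meta = {};
  Object.keys(node.userData || {}).forEach( function(key){
    if( META_KEYS.test(key) ) meta[key] = node.userData[key];
  });
  return meta;   // present this during an href/src eye- or mouse-over
}
</code></pre>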
<h1 id="accessibility-interface">Accessibility interface</h1>
<p>The addressibility of XR Fragments allows for unique 3D-to-text transcripts, as well as a textual interface to navigate 3D content.<br>
Spec (a minimal sketch follows the list):<br><br></p>
<ol>
<li>The enduser must be able to enable an accessibility-mode (which persists across application/webpage restarts)</li>
<li>Accessibility-mode must contain a text-input for the user to enter text</li>
<li>Accessibility-mode must contain a flexible textlog for the user to read (via screenreader, screen, or TTS e.g.)</li>
<li>the textlog contains <code>aria-descriptions</code>, and its narration (Screenreader e.g.) can be skipped (via 2-button navigation)</li>
<li>The <code>back</code> command should navigate back to the previous URL (alias for browser-backbutton)</li>
<li>The <code>forward</code> command should navigate forward to the next URL (alias for browser-nextbutton)</li>
<li>A destination is a 3D node containing an <code>href</code> with a <code>pos=</code> XR fragment</li>
<li>The <code>go</code> command should list all possible destinations</li>
<li>The <code>go left</code> command should move the camera around 0.3 meters to the left</li>
<li>The <code>go right</code> command should move the camera around 0.3 meters to the right</li>
<li>The <code>go forward</code> command should move the camera 0.3 meters forward (direction of current rotation).</li>
<li>The <code>rotate left</code> command should rotate the camera 0.3 to the left</li>
<li>The <code>rotate right</code> command should rotate the camera 0.3 to the right</li>
<li>The (dynamic) <code>go abc</code> command should navigate to <code>#pos=scene2</code> in case there&rsquo;s a 3D node with name <code>abc</code> and <code>href</code> value <code>#pos=scene2</code></li>
<li>The <code>look</code> command should give a (contextual) 3D-to-text transcript, by scanning the <code>aria-description</code> values of the current <code>pos=</code> value (including its children)</li>
<li>The <code>do</code> command should list all possible <code>href</code> values which don&rsquo;t contain a <code>pos=</code> XR Fragment</li>
<li>The (dynamic) <code>do abc</code> command should navigate/execute <code>https://.../...</code> in case a 3D node exists with name <code>abc</code> and <code>href</code> value <code>https://.../...</code></li>
</ol>
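<p>A minimal sketch of such a textual interface; every <code>browser.*</code> helper is an assumption here, standing in for whatever navigation/camera API the XR browser already has:</p>
<pre><code>// hypothetical sketch: map accessibility-mode text commands to browser actions
function handleCommand(cmd, browser){
  const parts = cmd.trim().split(/\s+/);
  const verb  = parts[0];
  const arg   = parts.slice(1).join(' ');
  switch( verb ){
    case 'back':    return browser.back();                       // previous URL
    case 'forward': return browser.forward();                    // next URL
    case 'go':
      if( !arg )              return browser.listDestinations(); // nodes with href containing pos=
      if( arg === 'left' )    return browser.moveCamera(-0.3, 0);
      if( arg === 'right' )   return browser.moveCamera( 0.3, 0);
      if( arg === 'forward' ) return browser.moveCamera( 0, 0.3);
      return browser.navigate( browser.hrefOfNode(arg) );        // e.g. 'go abc' -&gt; #pos=scene2
    case 'rotate':  return browser.rotateCamera( arg === 'left' ? -0.3 : 0.3 );
    case 'look':    return browser.transcript();                 // aria-descriptions around pos=
    case 'do':
      if( !arg ) return browser.listActions();                   // hrefs without pos=
      return browser.navigate( browser.hrefOfNode(arg) );        // e.g. 'do abc' -&gt; https://...
  }
}
</code></pre>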
<h2 id="two-button-navigation">Two-button navigation</h2>
<p>For specific user-profiles, gyroscope/mouse/keyboard/audio/visual output will not be available.<br>
Therefore, a two-button navigation interface is the bare minimum (see the sketch after this list):</p>
<ol>
<li>objects with href metadata can be cycled via a key (tab on a keyboard)</li>
<li>objects with href metadata can be activated via a key (enter on a keyboard)</li>
<li>the TTS reads the href-value (and/or aria-description if available)</li>
</ol>
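<p>A minimal sketch of this two-button cycle; <code>speak</code> (TTS/screenreader) and <code>activate</code> (follow the href) are assumed helpers:</p>
<pre><code>// hypothetical sketch: tab cycles href-objects, enter activates the focused one
let focus = -1;

function onKey(key, hrefNodes, speak, activate){
  if( key === 'Tab' ){
    focus = (focus + 1) % hrefNodes.length;
    const node = hrefNodes[focus];
    speak( node.userData['aria-description'] || node.userData.href );
  }
  if( key === 'Enter' ){
    if( focus !== -1 ) activate( hrefNodes[focus] );   // teleport / navigate / execute
  }
}
</code></pre>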
<h2 id="overlap-with-fileformat-specific-extensions">Overlap with fileformat-specific extensions</h2>
<p>Some 3D scene-fileformats have support for extensions.
What if their functionality overlaps?
For example, glTF has the <code>OMI_LINK</code> extension which might overlap with XR Fragment&rsquo;s <code>href</code>:</p>
<blockquote>
<p>Priority order and precedence apply first; otherwise a fallback applies:</p>
</blockquote>
<ol>
<li><strong>Extensions Take Precedence</strong>: Since glTF-specific extensions are designed with the format&rsquo;s specific needs and optimizations in mind, they should take precedence over <code>extras</code> metadata in cases where both contain overlapping functionality. This approach aligns with the idea that extensions are more likely to be interpreted uniformly by glTF-compatible software.</li>
<li><strong>Fallback / Fall-through Mechanism</strong>: If a glTF implementation does not support a particular extension, the (XRF) <code>extras</code> field can serve as a fallback. This way, metadata provided in <code>extras</code> can still be useful for applications that don&rsquo;t handle certain extensions.</li>
</ol>
<blockquote>
<p><strong>Example 1</strong> In case of the OMI_LINK glTF extension (<code>href: https://nlnet.nl</code>) and an XR Fragment (<code>href: #pos=otherroom</code> or <code>href: otherplanet.glb</code>), it is clear that <code>https://nlnet.nl</code> should open in a browser tab, whereas the XR Fragment links should teleport the user. If the OMI_LINK contains an XR Fragment (<code>#pos=</code> e.g.), only a teleport should be performed (and other [overlapping] metadata should be ignored).</p>
<p><strong>Example 2</strong> If an extension uses XR Fragments in URIs (<code>href: #pos=otherroom</code> or <code>href: xrf://-walls</code> in OMI_LINK e.g.), then perform them according to the XR Fragment spec (teleport the user), but only once: ignore further overlapping metadata for that usecase.</p>
</blockquote>
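<p>A minimal sketch of that precedence order for <code>href</code>, assuming parsed glTF JSON; the extension key and property names are illustrative:</p>
<pre><code>// hypothetical sketch: prefer a supported extension, fall back to extras (XRF) metadata
function resolveHref(node, supportedExtensions){
  const ext = node.extensions || {};
  if( ext.OMI_LINK ){
    if( supportedExtensions.includes('OMI_LINK') ){
      const uri = ext.OMI_LINK.uri;
      // if the extension itself carries an XR Fragment, handle it per this spec (and only once)
      return { href: uri, xrfragment: uri.startsWith('#') || uri.startsWith('xrf:') };
    }
  }
  // fall-through: extras metadata stays useful for viewers without extension support
  return (node.extras || {}).href ? { href: node.extras.href } : null;
}
</code></pre>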
<h2 id="vendor-prefixes">Vendor Prefixes</h2>
<p>Vendor-specific metadata in 3D scene-files is similar to vendor-specific <a href="https://en.wikipedia.org/wiki/CSS#Vendor_prefixes">CSS-prefixes</a> (<code>-moz-opacity: 0.2</code> e.g.).
This allows popular 3D engines/frameworks to initialize specific features when loading a scene/object, in a progressive-enhanced way.</p>
<p>Vendor Prefixes allow embedding 3D engine/framework-specific features in a 3D file via metadata:</p>
<table>
<thead>
<tr>
<th>what</th>
<th>XR metadata</th>
<th>Lowest common denominator</th>
</tr>
</thead>
<tbody>
<tr>
<td>CSS</td>
<td>vendor-agnostic</td>
<td>2D canvas + object referencing/styling</td>
</tr>
<tr>
<td>XR Fragments</td>
<td>vendor-agnostic</td>
<td>3D camera + object(file) load/embed/click/referencing</td>
</tr>
<tr>
<td>Vendor prefixs</td>
<td>vendor-<strong>specific</strong></td>
<td>Specialized Entity-Component implementation</td>
</tr>
</tbody>
</table>
<blockquote>
<p>Why? Because not all XR interactions can/should be solved/standardized by embedding XR Fragments into any 3D file.
The lowest common denominator between 3D engines is the &lsquo;entity&rsquo;-part of their entity-component-system (ECS). The &lsquo;component&rsquo;-part can be progressively enhanced via vendor prefixes.</p>
</blockquote>
<p>For example, the following metadata can be added to a .glb file, to make an object grabbable in AFRAME:</p>
<pre><code>+────────────────────────────────────────────────────────────────────────────────────────────────────────+
│ http://y.io/z.glb | AFRAME app │
│-----------------------------------------------+--------------------------------------------------------│
│ | │
│ | after loading the glb, john can be placed into the │
│ +-[3D mesh]-+ | castle via hands, because the author added metadata to │
│ | / \ | | john via either: │
│ | / \ | | │
│ | / \ | | 1. Blender (custom property-box, no plugins needed) │
│ | |_____| | | │
│ +-----│-----+ | 2. javascript-code: │
│ │ | │
│ ├─ name: castle | for( var com in this.el.components ){ │
│ └─ tag: house baroque | this.el.object3D.userData[`-AFRAME-${com}`] = '' │
│ | } │
│ [3D mesh-+ | // save to z.glb in AFRAME inspector │
│ | ├─ name: john | │
│ | O ├─ age: 23 | │
│ | /|\ ├─ -aframe-grabbable: '' | &gt; inits 'grabbable' component on object john │
│ | / \ ├─ -aframe-material.color: '#F0A' | &gt; inits 'material' component on object john │
│ | ├─ -aframe-text.value: '{name}{age}'| &gt; inits 'text' component (*) with value 'john' │
│ | ├─ -three-material.fog: false | &gt; changes material settings in THREE.js app │
│ | ├─ -godot-Label3D.text: '{name}{age}'| &gt; inits 'Label3D' component (*) in Godot │
│ +--------+ | │
│ | │
├─ -GODOT-version: '4.3' | &gt; exporters/authors can report targeted version │
├─ -AFRAME-version: '1.6.0' | and (optionally) hint component-repo│
├─ -AFRAME-info: 'https://git.benetou.fr/comps' │
│ | │
+────────────────────────────────────────────────────────────────────────────────────────────────────────+
</code></pre>
<ul>
<li>key/value syntax: <code>-&lt;vendorname&gt;-&lt;component|version&gt;.&lt;key&gt;</code>: <code>[string/boolean/float/int]</code> value</li>
</ul>
<p>String template-values are evaluated as per <a href="https://www.rfc-editor.org/rfc/rfc6570">URI Templates (RFC6570)</a> Level 1.</p>
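<p>A minimal sketch of how an AFRAME-based viewer might honour its own prefix after loading the file; the traversal and the <code>fields</code> lookup (other metadata on the same object, like <code>name</code> and <code>age</code>) are assumptions:</p>
<pre><code>// hypothetical sketch: map '-aframe-component.key' userData onto AFRAME components
function applyVendorPrefixes(el, userData, fields){
  Object.keys(userData).forEach( function(key){
    const m = key.match(/^-aframe-([^.]+)\.?(.*)$/i);
    if( !m ) return;                                   // ignore other vendors' prefixes
    // RFC6570 Level-1 style: {name} resolves against the object's other metadata fields
    const value = String(userData[key]).replace(/\{(\w+)\}/g, function(match, name){
      return fields[name] || match;
    });
    if( m[2] ) el.setAttribute(m[1], m[2], value);     // e.g. -aframe-material.color: '#F0A'
    else       el.setAttribute(m[1], value);           // e.g. -aframe-grabbable: ''
  });
}
</code></pre>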
<blockquote>
<p>This &lsquo;separating of mechanism from policy&rsquo; (unix rule) does <strong>somewhat</strong> break portability of an XR experience, but still prevents (E-waste of) handcoded virtual worlds. It allows for (XR experience) metadata to survive in future 3D engines and scene-fileformats.</p>
</blockquote>
<h1 id="security-considerations">Security Considerations</h1>
<p>The only dynamic parts are <a href="https://www.w3.org/TR/media-frags/">W3C Media Fragments</a> and <a href="https://www.rfc-editor.org/rfc/rfc6570">URI Templates (RFC6570)</a>.<br>
The use of URI Templates is limited to pre-defined variables and Level 1 fragment-expansion only, which makes it quite safe.<br>
In fact, it is much safer than relying on a scripting language (javascript), which can also change the URN.</p>
<h1 id="faq">FAQ</h1>
<p><strong>Q:</strong> Why is everything HTTP GET-based? What about POST/PUT/DELETE and HATEOAS?<br>
<strong>A:</strong> Because it&rsquo;s out of scope: XR Fragment specifies a read-only way to surf XR documents. These things belong in the application layer (for example, an XR Hypermedia browser can decide to support POST/PUT/DELETE requests for embedded HTML thru <code>src</code> values)</p>
<hr>
<p><strong>Q:</strong> Why isn&rsquo;t there support for scripting? URI Template Fragments are so limited compared to WASM &amp; javascript<br>
<strong>A:</strong> This is out of scope, as it unhyperifies hypermedia; it is up to XR hypermedia browser-extensions.<br> Historically, scripting/Javascript has been able to turn webpages from hypermedia documents into their opposite (hyperscripted nonhypermedia documents).<br>In order to prevent this backward movement (hypermedia tends to liberate people from finnicky scripting), XR Fragments uses <a href="https://www.w3.org/TR/media-frags/">W3C Media Fragments</a> and <a href="https://www.rfc-editor.org/rfc/rfc6570">URI Templates (RFC6570)</a>, preventing it from unhyperifying itself by hardcoupling to a particular markup or scripting language. <br>
XR Fragments supports filtering objects in a scene only, because in the history of the javascript-powered web, showing/hiding document-entities has been one of the most popular basic usecases.<br>
Doing advanced scripting &amp; network-requests under the hood is obviously an interesting endeavour, but it is something which should not be hardcoupled with XR Fragments or hypermedia.<br>This perhaps belongs more to browser extensions.<br>
Non-HTML hypermedia browsers should make browser extensions the place to &lsquo;extend&rsquo; experiences, in contrast to code/javascript inside hypermedia documents (which turned out to be a hypermedia antipattern).</p>
<h1 id="authors">authors</h1>
<ul>
<li>Leon van Kammen (@lvk@mastodon.online)</li>
<li>Jens Finkhäuser (@jens@social.finkhaeuser.de)</li>
</ul>
<h1 id="iana-considerations">IANA Considerations</h1>
<p>This document has no IANA actions.</p>
<h1 id="acknowledgments">Acknowledgments</h1>
<ul>
<li><a href="https://nlnet.nl">NLNET</a></li>
<li><a href="https://futureoftext.org">Future of Text</a></li>
<li><a href="https://visual-meta.info">visual-meta.info</a></li>
<li>Michiel Leenaars</li>
<li>Gerben van der Broeke</li>
<li>Mauve</li>
<li>Jens Finkhäuser</li>
<li>Marc Belmont</li>
<li>Tim Gerritsen</li>
<li>Frode Hegland</li>
<li>Brandel Zackernuk</li>
<li>Mark Anderson</li>
</ul>
<h1 id="appendix-definitions">Appendix: Definitions</h1>
<table>
<thead>
<tr>
<th>definition</th>
<th>explanation</th>
</tr>
</thead>
<tbody>
<tr>
<td>human</td>
<td>a sentient being who thinks fuzzy, absorbs, and shares thought (by plain text, not markup language)</td>
</tr>
<tr>
<td>scene</td>
<td>a (local/remote) 3D scene or 3D file (index.gltf e.g.)</td>
</tr>
<tr>
<td>3D object</td>
<td>an object inside a scene characterized by vertex-, face- and customproperty data.</td>
</tr>
<tr>
<td>URI</td>
<td>some resource at something somewhere via someprotocol (<code>http://me.com/foo.glb#foo</code> or <code>e76f8efec8efce98e6f</code> <a href="https://interpeer.io">see interpeer.io</a>)</td>
</tr>
<tr>
<td>URL</td>
<td>something somewhere via someprotocol (<code>http://me.com/foo.glb</code>)</td>
</tr>
<tr>
<td>URN</td>
<td>something at some domain (<code>me.com/foo.glb</code>)</td>
</tr>
<tr>
<td>metadata</td>
<td>custom properties of text, 3D Scene or Object(nodes), relevant to machines and a human minority (academics/developers)</td>
</tr>
<tr>
<td>XR fragment</td>
<td>URI Fragment with spatial hints like <code>#pos=0,0,0&amp;t=1,100</code> e.g.</td>
</tr>
<tr>
<td>the XRWG</td>
<td>wordgraph (collapses 3D scene to tags)</td>
</tr>
<tr>
<td>the hashbus</td>
<td>hashtags map to camera/scene-projections</td>
</tr>
<tr>
<td>spacetime hashtags</td>
<td>positions camera, triggers scene-preset/time</td>
</tr>
<tr>
<td>teleportation</td>
<td>repositioning the enduser to a different position (or 3D scene/file)</td>
</tr>
<tr>
<td>sourceportation</td>
<td>teleporting the enduser to the original XR Document of an <code>src</code> embedded object.</td>
</tr>
<tr>
<td>placeholder object</td>
<td>a 3D object with src-metadata (which will be replaced by the src-data)</td>
</tr>
<tr>
<td>src</td>
<td>(HTML-piggybacked) metadata of a 3D object which instances content</td>
</tr>
<tr>
<td>href</td>
<td>(HTML-piggybacked) metadata of a 3D object which links to content</td>
</tr>
<tr>
<td>filter</td>
<td>URI Fragment(s) which show/hide object(s) in a scene based on name/tag/property (<code>#cube&amp;-price=&gt;3</code>)</td>
</tr>
<tr>
<td>visual-meta</td>
<td><a href="https://visual.meta.info">visual-meta</a> data appended to text/books/papers which is indirectly visible/editable in XR.</td>
</tr>
<tr>
<td>requestless metadata</td>
<td>metadata which never spawns new requests (unlike RDF/HTML, which can cause framerate-dropping, hence not used a lot in games)</td>
</tr>
<tr>
<td>FPS</td>
<td>frames per second in spatial experiences (games,VR,AR e.g.), should be as high as possible</td>
</tr>
<tr>
<td>introspective</td>
<td>inward sensemaking (&ldquo;I feel this belongs to that&rdquo;)</td>
</tr>
<tr>
<td>extrospective</td>
<td>outward sensemaking (&ldquo;I&rsquo;m fairly sure John is a person who lives in oklahoma&rdquo;)</td>
</tr>
<tr>
<td><code>◻</code></td>
<td>ascii representation of a 3D object/mesh</td>
</tr>
<tr>
<td>(un)obtrusive</td>
<td>obtrusive: wrapping human text/thought in XML/HTML/JSON obfuscates human text into a salad of machine-symbols and words</td>
</tr>
<tr>
<td>flat 3D object</td>
<td>a 3D object of which all vertices share a plane</td>
</tr>
<tr>
<td>BibTeX</td>
<td>simple tagging/citing/referencing standard for plaintext</td>
</tr>
<tr>
<td>BibTag</td>
<td>a BibTeX tag</td>
</tr>
<tr>
<td>(hashtag)bibs</td>
<td>an easy to speak/type/scan tagging SDL (<a href="https://github.com/coderofsalvation/hashtagbibs">see here</a>) which expands to BibTeX/JSON/XML</td>
</tr>
</tbody>
</table>
</section>
</body>
</html>