<!DOCTYPE html> <html> <head> <title>XR Fragments</title> <meta name="GENERATOR" content="github.com/mmarkdown/mmark Mmark Markdown Processor - mmark.miek.nl"> <meta charset="utf-8"> </head> <body> <!-- for annotated version see: https://raw.githubusercontent.com/ietf-tools/rfcxml-templates-and-schemas/main/draft-rfcxml-general-template-annotated-00.xml --> <style type="text/css"> body{ font-family: monospace; max-width: 1000px; font-size: 15px; padding: 0% 10%; line-height: 30px; color:#555; background:#F7F7F7; } h1 { margin-top:40px; } pre{ line-height:18px; } a,a:visited,a:active{ color: #70f; } code{ border: 1px solid #AAA; border-radius: 3px; padding: 0px 5px 2px 5px; } pre{ line-height: 18px; overflow: auto; padding: 12px; } pre + code { background:#DDD; } pre>code{ border:none; border-radius:0px; padding:0; } blockquote{ padding-left: 30px; margin: 0; border-left: 5px solid #CCC; } th { border-bottom: 1px solid #000; text-align: left; padding-right:45px; padding-left:7px; background: #DDD; } td { border-bottom: 1px solid #CCC; font-size:13px; } </style> <br> <h1>XR Fragments</h1> <br> <pre> stream: IETF area: Internet status: informational author: Leon van Kammen date: 2023-04-12T00:00:00Z workgroup: Internet Engineering Task Force value: draft-XRFRAGMENTS-leonvankammen-00 </pre> <h1 class="special" id="abstract">Abstract</h1> <p>This draft is a specification for 4D URLs & <a href="https://github.com/coderofsalvation/hypermediatic">hypermediatic</a> navigation, which links together space, time & text, for hypermedia browsers with or without a network connection.<br> The specification promotes spatial addressability, sharing, navigation, querying and annotation of interactive (text)objects for (XR) Browsers.<br> XR Fragments allows us to enrich existing dataformats, by recursive use of existing proven technologies like <a href="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</a> and BibTags notation.<br></p> <blockquote> <p>Almost
every idea in this document is demonstrated at <a href="https://xrfragment.org">https://xrfragment.org</a></p> </blockquote> <section data-matter="main"> <h1 id="introduction">Introduction</h1> <p>How can we add more features to existing text & 3D scenes, without introducing new dataformats?<br> Historically, there have been many attempts to create the ultimate markuplanguage or 3D fileformat.<br> The lowest common denominator is: describing/tagging/naming nodes using <strong>plain text</strong>.<br> XR Fragments allows us to enrich/connect existing dataformats, by introducing existing technologies/ideas:<br></p> <ol> <li>addressability and <a href="https://github.com/coderofsalvation/hypermediatic">hypermediatic</a> navigation of 3D scenes/objects: <a href="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</a> + src/href spatial metadata</li> <li>interlinking text & 3D by collapsing space into a Word Graph (XRWG) to show <a href="#visible-links">visible links</a>, and augmenting text with <a href="https://github.com/coderofsalvation/tagbibs">bibs</a> / <a href="https://en.wikipedia.org/wiki/BibTeX">BibTags</a> appendices (see <a href="https://visual-meta.info">visual-meta</a> e.g.)</li> <li>unlocking the spatial potential of the (originally 2D) hashtag (which jumps to a chapter) for navigating XR documents</li> </ol> <blockquote> <p>NOTE: The chapters in this document are ordered from highlevel to lowlevel (technical) as much as possible</p> </blockquote> <h1 id="core-principle">Core principle</h1> <p>XR Fragments strives to serve (nontechnical/fuzzy) humans first, and machine(implementations) later, by ensuring hasslefree text-vs-thought feedback loops.<br> This also means that the repair-ability of machine-matters should be human friendly too (not too complex).<br> XR Fragments seeks to connect the world of text (semantic web / RDF) and the world of pixels.<br> Instead of combining them (in a game-editor e.g.), XR Fragments opts for a more integrated
path <strong>towards</strong> them, by describing how to make browsers <strong>4D URL-ready</strong>:</p> <table> <thead> <tr> <th>principle</th> <th>XR 4D URL</th> <th>HTML 2D URL</th> </tr> </thead> <tbody> <tr> <td>the XRWG</td> <td>wordgraph (collapses 3D scene to tags)</td> <td>Ctrl-F (find)</td> </tr> <tr> <td>the hashbus</td> <td>hashtags map to camera/scene-projections</td> <td>hashtags map to document positions</td> </tr> <tr> <td>spacetime hashtags</td> <td>positions camera, triggers scene-preset/time</td> <td>jumps/scrolls to chapter</td> </tr> <tr> <td>src metadata</td> <td>renders content and offers sourceportation</td> <td>renders content</td> </tr> <tr> <td>href metadata</td> <td>teleports to other XR document</td> <td>jumps to other HTML document</td> </tr> <tr> <td>href metadata</td> <td>repositions camera or animation-range</td> <td>jumps to camera</td> </tr> <tr> <td>href metadata</td> <td>draws visible connection(s) for XRWG ‘tag’</td> <td></td> </tr> <tr> <td>href metadata</td> <td>triggers predefined view</td> <td>Media fragments</td> </tr> </tbody> </table> <blockquote> <p>XR Fragments does not look at XR (or the web) thru the lens of HTML.<br>But approaches things from a higherlevel feedbackloop/hypermedia browser-perspective:</p> </blockquote> <pre><code> +──────────────────────────────────────────────────────────────────────────────────────────────+ │ │ │ the soul of any URL: ://macro /meso ?micro #nano │ │ │ │ 2D URL: ://library.com /document ?search #chapter │ │ │ │ 4D URL: ://park.com /4Dscene.fbx ──> ?misc ──> #view ───> hashbus │ │ │ #query │ │ │ │ #tag │ │ │ │ │ │ │ XRWG <─────────────────────<────────────+ │ │ │ │ │ │ ├─ objects ───────────────>────────────│ │ │ └─ text ───────────────>────────────+ │ │ │ │ │ +──────────────────────────────────────────────────────────────────────────────────────────────+ </code></pre> <p>Traditional webbrowsers can become 4D document-ready by:</p> <ul> <li><a 
href="https://github.com/coderofsalvation/hypermediatic">hypermediatic</a> loading of 3D assets (gltf/fbx e.g.) natively (with or without using HTML).</li> <li>allowing assets to publish hashtags to themselves (the scene) using the hashbus (like hashtags controlling the scrollbar).</li> <li>collapsing the 3D scene to a wordgraph (for essential navigation purposes) controllable thru a hash(tag)bus</li> </ul> <p>XR Fragments itself is <a href="https://github.com/coderofsalvation/hypermediatic">hypermediatic</a> and HTML-agnostic, though pseudo-XR Fragment browsers <strong>can</strong> be implemented on top of HTML/Javascript.</p> <h1 id="conventions-and-definitions">Conventions and Definitions</h1> <p>See the appendix below in case certain terms are not clear.</p> <h2 id="xr-fragment-uri-grammar">XR Fragment URI Grammar</h2> <pre><code>reserved = gen-delims / sub-delims gen-delims = "#" / "&" sub-delims = "," / "=" </code></pre> <blockquote> <p>Example: <code>://foo.com/my3d.gltf#pos=1,0,0&prio=-5&t=0,100</code></p> </blockquote> <table> <thead> <tr> <th>Demo</th> <th>Explanation</th> </tr> </thead> <tbody> <tr> <td><code>pos=1,2,3</code></td> <td>a vector/coordinate argument</td> </tr> <tr> <td><code>pos=1,2,3&rot=0,90,0&q=.foo</code></td> <td>combinators</td> </tr> </tbody> </table> <blockquote> <p>this fragment syntax is already implemented in all browsers</p> </blockquote> <h1 id="list-of-uri-fragments">List of URI Fragments</h1> <table> <thead> <tr> <th>fragment</th> <th>type</th> <th>example</th> <th>info</th> </tr> </thead> <tbody> <tr> <td><code>#pos</code></td> <td>vector3</td> <td><code>#pos=0.5,0,0</code></td> <td>positions camera (or XR floor) at xyz-coord 0.5,0,0</td> </tr> <tr> <td><code>#rot</code></td> <td>vector3</td> <td><code>#rot=0,90,0</code></td> <td>rotates camera to xyz-rotation 0,90,0</td> </tr> <tr> <td><code>#t</code></td> <td>vector3</td> <td><code>#t=1,500,1000</code></td> <td>plays animation-loop range between frame 500 and 1000, at normal speed</td> </tr>
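<tr> <td colspan="4"> As a sketch of how such fragments can be split into typed arguments (a hypothetical <code>parseXRFragment</code> helper, not part of this spec or its reference implementation):

```javascript
// Hypothetical sketch: parse an XR Fragment hash like
// "#pos=0.5,0,0&rot=0,90,0&t=0,100" into typed arguments.
// Assumption: 2+ comma-separated numbers form a vector; single
// values and non-numeric values are kept as strings.
function parseXRFragment(hash) {
  const args = {};
  for (const part of hash.replace(/^#/, '').split('&')) {
    const [key, value] = part.split('=');
    if (value === undefined) { args[key] = true; continue; } // bare tag like #cube
    const nums = value.split(',').map(Number);
    args[key] = nums.length > 1 && nums.every((n) => !isNaN(n)) ? nums : value;
  }
  return args;
}
```

e.g. <code>parseXRFragment('#pos=0.5,0,0&t=0,100')</code> yields <code>{ pos:[0.5,0,0], t:[0,100] }</code>. </td> </tr>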
<tr> <td><code>#......</code></td> <td>string</td> <td><code>#.cubes</code> <code>#cube</code></td> <td>predefined views, XRWG fragments and ID fragments</td> </tr> </tbody> </table> <blockquote> <p>xyz coordinates are similar to ones found in SVG Media Fragments</p> </blockquote> <h1 id="list-of-metadata-for-3d-nodes">List of metadata for 3D nodes</h1> <table> <thead> <tr> <th>key</th> <th>type</th> <th>example (JSON)</th> <th>function</th> <th>existing compatibility</th> </tr> </thead> <tbody> <tr> <td><code>href</code></td> <td>string</td> <td><code>"href": "b.gltf"</code></td> <td>XR teleport</td> <td>custom property in 3D fileformats</td> </tr> <tr> <td><code>src</code></td> <td>string</td> <td><code>"src": "#cube"</code></td> <td>XR embed / teleport</td> <td>custom property in 3D fileformats</td> </tr> <tr> <td><code>tag</code></td> <td>string</td> <td><code>"tag": "cubes geo"</code></td> <td>tag object (for query-use / XRWG highlighting)</td> <td>custom property in 3D fileformats</td> </tr> </tbody> </table> <p>Supported popular compatible 3D fileformats: <code>.gltf</code>, <code>.obj</code>, <code>.fbx</code>, <code>.usdz</code>, <code>.json</code> (THREE.js), <code>.dae</code> and so on.</p> <blockquote> <p>NOTE: XR Fragments are optional but also file- and protocol-agnostic, which means that programmatic 3D scene(nodes) can also use the mechanism/metadata.</p> </blockquote> <h1 id="spatial-referencing-3d">Spatial Referencing 3D</h1> <p>XR Fragments assume the following objectname-to-URIFragment mapping:</p> <pre><code> my.io/scene.fbx +─────────────────────────────+ │ sky │ src: http://my.io/scene.fbx#sky (includes building,mainobject,floor) │ +─────────────────────────+ │ │ │ building │ │ src: http://my.io/scene.fbx#building (includes mainobject,floor) │ │ +─────────────────────+ │ │ │ │ │ mainobject │ │ │ src: http://my.io/scene.fbx#mainobject (includes floor) │ │ │ +─────────────────+ │ │ │ │ │ │ │ floor │ │ │ │ src: http://my.io/scene.fbx#floor (just 
floor object) │ │ │ │ │ │ │ │ │ │ │ +─────────────────+ │ │ │ │ │ +─────────────────────+ │ │ │ +─────────────────────────+ │ +─────────────────────────────+ </code></pre> <blockquote> <p>Every 3D fileformat supports named 3D object, and this name allows URLs (fragments) to reference them (and their children objects).</p> </blockquote> <p>Clever nested design of 3D scenes allow great ways for re-using content, and/or previewing scenes.<br> For example, to render a portal with a preview-version of the scene, create an 3D object with:</p> <ul> <li>href: <code>https://scene.fbx</code></li> <li>src: <code>https://otherworld.gltf#mainobject</code></li> </ul> <blockquote> <p>It also allows <strong>sourceportation</strong>, which basically means the enduser can teleport to the original XR Document of an <code>src</code> embedded object, and see a visible connection to the particular embedded object. Basically an embedded link becoming an outbound link by activating it.</p> </blockquote> <h1 id="navigating-3d">Navigating 3D</h1> <table> <thead> <tr> <th>fragment</th> <th>type</th> <th>functionality</th> </tr> </thead> <tbody> <tr> <td><b>#pos</b>=0,0,0</td> <td>vector3</td> <td>(re)position camera</td> </tr> <tr> <td><b>#t</b>=0,100</td> <td>vector3</td> <td>set playback speed, and (re)position looprange of scene-animation or <code>src</code>-mediacontent</td> </tr> <tr> <td><b>#rot</b>=0,90,0</td> <td>vector3</td> <td>rotate camera</td> </tr> </tbody> </table> <p><a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/pos.js">» example implementation</a><br> <a href="https://github.com/coderofsalvation/xrfragment/issues/5">» discussion</a><br></p> <ol> <li>the Y-coordinate of <code>pos</code> identifies the floorposition. 
This means that desktop-projections usually need to add 1.5m (average person height) on top (which is done automatically by VR/AR headsets).</li> <li>set the position of the camera accordingly to the vector3 values of <code>#pos</code></li> <li><code>rot</code> sets the rotation of the camera (only for non-VR/AR headsets)</li> <li><code>t</code> sets the playbackspeed and animation-range of the current scene animation(s) or <code>src</code>-mediacontent (video/audioframes e.g., use <code>t=0,7,7</code> to ‘STOP’ at frame 7 e.g.)</li> <li>in case an <code>href</code> does not mention any <code>pos</code>-coordinate, <code>pos=0,0,0</code> will be assumed</li> </ol> <p>Here’s an ascii representation of a 3D scene-graph which contains 3D objects <code>◻</code> and their metadata:</p> <pre><code> +────────────────────────────────────────────────────────+ │ │ │ index.gltf │ │ │ │ │ ├── ◻ buttonA │ │ │ └ href: #pos=1,0,1&t=100,200 │ │ │ │ │ └── ◻ buttonB │ │ └ href: other.fbx │ <── file─agnostic (can be .gltf .obj etc) │ │ +────────────────────────────────────────────────────────+ </code></pre> <p>An XR Fragment-compatible browser viewing this scene, allows the end-user to interact with the <code>buttonA</code> and <code>buttonB</code>.<br> In case of <code>buttonA</code> the end-user will be teleported to another location and time in the <strong>current loaded scene</strong>, but <code>buttonB</code> will <strong>replace the current scene</strong> with a new one, like <code>other.fbx</code>, and assume <code>pos=0,0,0</code>.</p> <h1 id="top-level-url-processing">Top-level URL processing</h1> <blockquote> <p>Example URL: <code>://foo/world.gltf#cube&pos=0,0,0</code></p> </blockquote> <p>The URL-processing-flow for hypermedia browsers goes like this:</p> <ol> <li>IF a <code>#cube</code> matches a custom property-key (of an object) in the 3D file/scene (<code>#cube</code>: <code>#......</code>) <b>THEN</b> execute that predefined_view.</li> <li>IF scene operators 
(<code>pos</code>) and/or animation operator (<code>t</code>) are present in the URL then (re)position the camera and/or animation-range accordingly.</li> <li>IF no camera-position has been set in <b>step 1 or 2</b> update the top-level URL with <code>#pos=0,0,0</code> (<a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/navigator.js#L31]]">example</a>)</li> <li>IF a <code>#cube</code> matches the name (of an object) in the 3D file/scene then draw a line from the enduser(’s heart) to that object (to highlight it).</li> <li>IF a <code>#cube</code> matches anything else in the XR Word Graph (XRWG) draw wires to them (text or related objects).</li> </ol> <h1 id="embedding-xr-content-src-instancing">Embedding XR content (src-instancing)</h1> <p><code>src</code> is the 3D version of the <a target="_blank" href="https://www.w3.org/html/wiki/Elements/iframe">iframe</a>.<br> It instances content (in objects) in the current scene/asset.</p> <table> <thead> <tr> <th>fragment</th> <th>type</th> <th>example value</th> </tr> </thead> <tbody> <tr> <td><code>src</code></td> <td>string (uri, hashtag/query)</td> <td><code>#cube</code><br><code>#sometag</code><br>#q=-ball_inside_cube<code><br></code>#q=-/sky -rain<code><br></code>#q=-.language .english<code><br></code>#q=price:>2 price:<5`<br><code>https://linux.org/penguin.png</code><br><code>https://linux.world/distrowatch.gltf#t=1,100</code><br><code>linuxapp://conference/nixworkshop/apply.gltf#q=flyer</code><br><code>androidapp://page1?tutorial#pos=0,0,1&t1,100</code></td> </tr> </tbody> </table> <p>Here’s an ascii representation of a 3D scene-graph with 3D objects <code>◻</code> which embeds remote & local 3D objects <code>◻</code> with/out using queries:</p> <pre><code> +────────────────────────────────────────────────────────+ +─────────────────────────+ │ │ │ │ │ index.gltf │ │ ocean.com/aquarium.fbx │ │ │ │ │ │ │ │ ├── ◻ canvas │ │ └── ◻ fishbowl │ │ │ └ src: painting.png │ │ ├─ ◻ bass │ │ │ │ 
│ └─ ◻ tuna │ │ ├── ◻ aquariumcube │ │ │ │ │ └ src: ://rescue.com/fish.gltf#bass%20tuna │ +─────────────────────────+ │ │ │ │ ├── ◻ bedroom │ │ │ └ src: #canvas │ │ │ │ │ └── ◻ livingroom │ │ └ src: #canvas │ │ │ +────────────────────────────────────────────────────────+ </code></pre> <p>An XR Fragment-compatible browser viewing this scene, lazy-loads and projects <code>painting.png</code> onto the (plane) object called <code>canvas</code> (which is copy-instanced in the bed and livingroom).<br> Also, after lazy-loading <code>ocean.com/aquarium.gltf</code>, only the queried objects <code>bass</code> and <code>tuna</code> will be instanced inside <code>aquariumcube</code>.<br> Resizing will be happen accordingly to its placeholder object <code>aquariumcube</code>, see chapter Scaling.<br></p> <blockquote> <p>Instead of cherrypicking objects with <code>#bass&tuna</code> thru <code>src</code>, queries can be used to import the whole scene (and filter out certain objects). See next chapter below.</p> </blockquote> <p><strong>Specification</strong>:</p> <ol> <li>local/remote content is instanced by the <code>src</code> (query) value (and attaches it to the placeholder mesh containing the <code>src</code> property)</li> <li><b>local</b> <code>src</code> values (URL <strong>starting</strong> with <code>#</code>, like <code>#cube&foo</code>) means <strong>only</strong> the mentioned objectnames will be copied to the instanced scene (from the current scene) while preserving their names (to support recursive selectors). <a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/src.js">(example code)</a></li> <li><b>local</b> <code>src</code> values indicating a query (<code>#q=</code>), means that all included objects (from the current scene) will be copied to the instanced scene (before applying the query) while preserving their names (to support recursive selectors). 
<a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/src.js">(example code)</a></li> <li>the instanced scene (from a <code>src</code> value) should be <b>scaled according</b> to its placeholder object, or <b>scaled relatively</b> based on the scale-property (of a geometry-less placeholder, an ‘empty’-object in blender e.g.). For more info see Chapter Scaling.</li> <li><b>external</b> <code>src</code> values should be served with an appropriate mimetype (so the XR Fragment-compatible browser will know how to render it). The bare minimum supported mimetypes are: <ul> <li><code>model/gltf+json</code></li> <li><code>image/png</code></li> <li><code>image/jpg</code></li> <li><code>text/plain;charset=utf-8;bib=^@</code></li> </ul></li> <li><code>src</code> values should make their placeholder object invisible, and only flush its children when the resolved content can successfully be retrieved (see <a href="#links">broken links</a>)</li> <li><b>external</b> <code>src</code> values should respect the fallback link mechanism (see <a href="#broken-links">broken links</a>)</li> <li>when the placeholder object is a 2D plane, but the mimetype is 3D, then render the spatial content on that plane via a stencil buffer.</li> <li>src-values are non-recursive: when linking to an external object (<code>src: foo.fbx#bar</code>), <code>src</code>-metadata on object <code>bar</code> should be ignored.</li> <li>clicking on external <code>src</code>-values should always allow sourceportation: teleporting to the origin URI to which the object belongs.</li> <li>when only one object was cherrypicked (<code>#cube</code> e.g.), set its position to <code>0,0,0</code></li> </ol> <p><a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/src.js">» example implementation</a><br> <a href="https://github.com/coderofsalvation/xrfragment/blob/main/example/assets/src.gltf#L192">» example 3D asset</a><br> <a
href="https://github.com/coderofsalvation/xrfragment/issues/4">» discussion</a><br></p> <h1 id="navigating-content-internal-outbound-href-portals">Navigating content (internal/outbound href portals)</h1> <p>navigation, portals & mutations</p> <table> <thead> <tr> <th>fragment</th> <th>type</th> <th>example value</th> </tr> </thead> <tbody> <tr> <td><code>href</code></td> <td>string (uri or predefined view)</td> <td><code>#pos=1,1,0</code><br><code>#pos=1,1,0&rot=90,0,0</code><br><code>://somefile.gltf#pos=1,1,0</code><br></td> </tr> </tbody> </table> <ol> <li><p>clicking an outbound “external”- or “file URI” fully replaces the current scene and assumes <code>pos=0,0,0&rot=0,0,0</code> by default (unless specified)</p></li> <li><p>relocation/reorientation should happen locally for local URI’s (<code>#pos=....</code>)</p></li> <li><p>navigation should not happen “immediately” when the user is more than 2 meters away from the portal/object containing the href (to prevent accidental navigation e.g.)</p></li> <li><p>URL navigation should always be reflected in the client (in case of javascript: see <a href="https://github.com/coderofsalvation/xrfragment/blob/dev/src/3rd/js/three/navigator.js">here</a> for an example navigator).</p></li> <li><p>in XR mode, the navigator back/forward-buttons should always be visible (using a wearable e.g., see <a href="https://github.com/coderofsalvation/xrfragment/blob/dev/example/aframe/sandbox/index.html#L26-L29">here</a> for an example wearable)</p></li> <li><p>in case of navigating to a new position, “first” navigate to the “current position” so that the “back-button” of the “browser-history” always refers to the previous position (see <a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/href.js#L97">here</a>)</p></li> <li><p>portal-rendering: a 2:1 ratio texture-material indicates an equirectangular projection</p></li> </ol> <p><a
href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/href.js">» example implementation</a><br> <a href="https://github.com/coderofsalvation/xrfragment/blob/main/example/assets/href.gltf#L192">» example 3D asset</a><br> <a href="https://github.com/coderofsalvation/xrfragment/issues/1">» discussion</a><br></p> <h2 id="ux-spec">UX spec</h2> <p>End-users should always have read/write access to:</p> <ol> <li>the current (toplevel) <b>URL</b> (a URL-bar etc)</li> <li>URL-history (a <b>back/forward</b> button e.g.)</li> <li>Clicking/Touching an <code>href</code> navigates (and updates the URL) to another scene/file (and coordinate, in case the URL contains XR Fragments).</li> </ol> <h2 id="scaling-instanced-content">Scaling instanced content</h2> <p>Sometimes embedded properties (like <code>src</code>) instance new objects.<br> But what about their scale?<br> How does the scale of the object (with the embedded properties) impact the scale of the referenced content?<br></p> <blockquote> <p>Rule of thumb: visible placeholder objects act as a ‘3D canvas’ for the referenced scene (a plane acts like a 2D canvas for images, a cube as a 3D canvas, e.g.).</p> </blockquote> <ol> <li><b>IF</b> an embedded property (<code>src</code> e.g.)
is set on a non-empty placeholder object (geometry of >2 vertices):</li> </ol> <ul> <li>calculate the <b>bounding box</b> of the “placeholder” object (maxsize=1.4 e.g.)</li> <li>hide the “placeholder” object (its material e.g.)</li> <li>instance the <code>src</code> scene as a child of the existing object</li> <li>calculate the <b>bounding box</b> of the instanced scene, and scale it accordingly (to 1.4 e.g.)</li> </ul> <blockquote> <p>REASON: a non-empty placeholder object can act as a protective bounding-box (for remote content which might grow over time e.g.)</p> </blockquote> <ol start="2"> <li>ELSE multiply the scale-vector of the instanced scene with the scale-vector (a common property of a 3D node) of the <b>placeholder</b> object.</li> </ol> <blockquote> <p>TODO: needs intermediate visuals to make things more obvious</p> </blockquote> <h1 id="xr-fragment-queries">XR Fragment queries</h1> <p>Include, exclude, or hide/show objects using space-separated strings:</p> <table> <thead> <tr> <th>example</th> <th>outcome</th> </tr> </thead> <tbody> <tr> <td><code>#q=-sky</code></td> <td>show everything except the object named <code>sky</code></td> </tr> <tr> <td><code>#q=-tag:language tag:english</code></td> <td>hide everything with tag <code>language</code>, but show all tag <code>english</code> objects</td> </tr> <tr> <td><code>#q=price:>2 price:<5</code></td> <td>of all objects with property <code>price</code>, show only objects with a value between 2 and 5</td> </tr> </tbody> </table> <p>It’s a simple but powerful syntax which allows filtering the scene with a searchengine-prompt-style feel:</p> <ol> <li>queries are a way to traverse a scene, and filter objects based on their tag- or property-values.</li> <li>words like <code>german</code> match tag-metadata of 3D objects like <code>"tag":"german"</code></li> <li>words like <code>german</code> match (XR Text) objects with (Bib(s)TeX) tags like <code>#KarlHeinz@german</code> or <code>@german{KarlHeinz, ...</code> e.g.</li>
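<li>as a sketch, these matching rules can be tokenized into per-key rule objects in a few lines (a hypothetical <code>parseQuery</code> helper; the reference implementation is Query.hx):

```javascript
// Hypothetical sketch (not the reference Query.hx parser): tokenize a
// query like "-/sky price:>2 tag:english" into rule objects shaped as
// {id, root, value?, op?} -- an assumed shape, for illustration only.
function parseQuery(q) {
  const store = {};
  for (let token of q.split(' ')) {
    const rule = { id: !token.startsWith('-'), root: false }; // '-' = excluder
    token = token.replace(/^-/, '');
    if (token.startsWith('/')) { rule.root = true; token = token.slice(1); } // root-scene selector
    const [key, value] = token.split(':');
    if (value !== undefined) {
      const m = value.match(/^([><]?)([0-9.]+)$/); // number comparison like price:>2
      if (m) { rule.op = m[1] || '='; rule.value = parseFloat(m[2]); }
      else rule.value = value;
    }
    store[key] = rule;
  }
  return store;
}
```

e.g. <code>parseQuery('-/sky')</code> yields <code>{ sky: {id:false, root:true} }</code>, matching the store layout described in the Query Parser section.</li>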
</ol> <ul> <li>see <a href="https://coderofsalvation.github.io/xrfragment.media/queries.mp4">an (outdated) example video here</a></li> </ul> <h2 id="including-excluding">including/excluding</h2> <table> <thead> <tr> <th>operator</th> <th>info</th> </tr> </thead> <tbody> <tr> <td><code>-</code></td> <td>removes/hides object(s)</td> </tr> <tr> <td><code>:</code></td> <td>indicates an object-embedded custom property key/value</td> </tr> <tr> <td><code>></code> <code><</code></td> <td>compares float or int numbers</td> </tr> <tr> <td><code>/</code></td> <td>reference to the root-scene.<br>Useful for (preventing) showing/hiding objects in nested scenes (instanced by <code>src</code>) (*)</td> </tr> </tbody> </table> <blockquote> <p>* = <code>#q=-/cube</code> hides object <code>cube</code> only in the root-scene (not nested <code>cube</code> objects)<br> <code>#q=-cube</code> hides both object <code>cube</code> in the root-scene <b>AND</b> nested <code>cube</code> objects</p> </blockquote> <p><a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/q.js">» example implementation</a> <a href="https://github.com/coderofsalvation/xrfragment/blob/main/example/assets/query.gltf#L192">» example 3D asset</a> <a href="https://github.com/coderofsalvation/xrfragment/issues/3">» discussion</a></p> <h2 id="query-parser">Query Parser</h2> <p>Here’s how to write a query parser:</p> <ol> <li>create an associative array/object to store query-arguments as objects</li> <li>detect object id’s & properties <code>foo:1</code> and <code>foo</code> (reference regex: <code>/^.*:[><=!]?/</code> )</li> <li>detect excluders like <code>-foo</code>,<code>-foo:1</code>,<code>-.foo</code>,<code>-/foo</code> (reference regex: <code>/^-/</code> )</li> <li>detect root selectors like <code>/foo</code> (reference regex: <code>/^[-]?\//</code> )</li> <li>detect number values like <code>foo:1</code> (reference regex: <code>/^[0-9\.]+$/</code> )</li> <li>for every query
token, split the string on <code>:</code></li> <li>create an empty array <code>rules</code></li> <li>then strip the key-operator: convert “-foo” into “foo”</li> <li>add the operator and value to the rule-array</li> <li>then we set <code>id</code> to <code>true</code> or <code>false</code> (false=excluder <code>-</code>)</li> <li>and we set <code>root</code> to <code>true</code> or <code>false</code> (true=<code>/</code> root selector is present)</li> <li>we convert key ‘/foo’ into ‘foo’</li> <li>finally we add the key/value to the store like <code>store.foo = {id:false,root:true}</code> e.g.</li> </ol> <blockquote> <p>An example query-parser (which compiles to many languages) can be <a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/xrfragment/Query.hx">found here</a></p> </blockquote> <h1 id="visible-links">Visible links</h1> <p>When predefined views, XRWG fragments and ID fragments (<code>#cube</code> or <code>#mytag</code> e.g.) are triggered by the enduser (via the toplevel URL or by clicking an <code>href</code>):</p> <ol> <li>draw a wire from the enduser (preferably a bit below the camera, heartposition) to object(s) matching that ID (objectname)</li> <li>draw a wire from the enduser (preferably a bit below the camera, heartposition) to object(s) matching that <code>tag</code> value</li> <li>draw a wire from the enduser (preferably a bit below the camera, heartposition) to object(s) containing that in their <code>src</code> or <code>href</code> value</li> </ol> <p>The obvious approach for this is to consult the XRWG (<a href="https://github.com/coderofsalvation/xrfragment/blob/feat/macros/src/3rd/js/XRWG.js">example</a>), which basically has all these things already collected/organized for you during scene-load.</p> <p><strong>UX</strong></p> <ol start="4"> <li>do not update the wires when the enduser moves; leave them as is</li> <li>offer a control near the back/forward button which allows the user to control (or turn off) the correlation-intensity of the
XRWG</li> </ol> <h1 id="text-in-xr-tagging-linking-to-spatial-objects">Text in XR (tagging, linking to spatial objects)</h1> <p>How does XR Fragments interlink text with objects?</p> <blockquote> <p>XR Fragments does this by collapsing space into a <strong>Word Graph</strong> (the <strong>XRWG</strong> <a href="https://github.com/coderofsalvation/xrfragment/blob/feat/macros/src/3rd/js/XRWG.js">example</a>), augmented by Bib(s)Tex.</p> </blockquote> <p>Instead of just throwing together all kinds of media types into one experience (games), what about their tagged/semantic relationships?<br> Perhaps the following question is related: why is HTML adopted less in games outside the browser? Through the lens of constructive lazy game-developers, ideally metadata must come <strong>with</strong> the text, but not <strong>obfuscate</strong> the text, or <strong>spawn another request</strong> to fetch it.<br> XR Fragments does this by detecting Bib(s)Tex, without introducing a new language or fileformat<br></p> <blockquote> <p>Why Bib(s)Tex?
Because its seems to be the lowest common denominator for an human-curated XRWG (extendable by speech/scanner/writing/typing e.g, see <a href="https://github.com/coderofsalvation/hashtagbibs#bibs--bibtex-combo-lowest-common-denominator-for-linking-data">further motivation here</a>)</p> </blockquote> <p>Hence:</p> <ol> <li>XR Fragments promotes (de)serializing a scene to the XRWG (<a href="https://github.com/coderofsalvation/xrfragment/blob/feat/macros/src/3rd/js/XRWG.js">example</a>)</li> <li>XR Fragments primes the XRWG, by collecting words from the <code>tag</code> and name-property of 3D objects.</li> <li>XR Fragments primes the XRWG, by collecting words from <strong>optional</strong> metadata <strong>at the end of content</strong> of text (see default mimetype & Data URI)</li> <li><a href="https://github.com/coderofsalvation/hashtagbibs">Bib’s</a> and BibTex are first tag citizens for priming the XRWG with words (from XR text)</li> <li>Like Bibs, XR Fragments generalizes the BibTex author/title-semantics (<code>author{title}</code>) into <strong>this</strong> points to <strong>that</strong> (<code>this{that}</code>)</li> <li>The XRWG should be recalculated when textvalues (in <code>src</code>) change</li> <li>HTML/RDF/JSON is still great, but is beyond the XRWG-scope (they fit better in the application-layer)</li> <li>Applications don’t have to be able to access the XRWG programmatically, as they can easily generate one themselves by traversing the scene-nodes.</li> <li>The XR Fragment focuses on fast and easy-to-generate end-user controllable word graphs (instead of complex implementations that try to defeat word ambiguity)</li> <li>Tags are the scope for now (supporting <a href="https://github.com/WICG/scroll-to-text-fragment">https://github.com/WICG/scroll-to-text-fragment</a> will be considered)</li> </ol> <p>Example:</p> <pre><code> http://y.io/z.fbx | Derived XRWG (expressed as BibTex) 
----------------------------------------------------------------------------+-------------------------------------- | @house{castle, +-[src: data:.....]----------------------+ +-[3D mesh]-+ | url = {https://y.io/z.fbx#castle} | Chapter one | | / \ | | } | | | / \ | | @baroque{castle, | John built houses in baroque style. | | / \ | | url = {https://y.io/z.fbx#castle} | | | |_____| | | } | #john@baroque | +-----│-----+ | @baroque{john} | | │ | | | ├─ name: castle | | | └─ tag: house baroque | +----------------------------------------+ | [3D mesh ] | | O ├─ name: john | | /|\ | | | / \ | | +--------+ | </code></pre> <blockquote> <p>the <code>#john@baroque</code>-bib associates both text <code>John</code> and objectname <code>john</code>, with tag <code>baroque</code></p> </blockquote> <p>Another example:</p> <pre><code> http://y.io/z.fbx | Derived XRWG (expressed as BibTex) ----------------------------------------------------------------------------+-------------------------------------- | +-[src: data:.....]----------------------+ +-[3D mesh]-+ | @house{castle, | Chapter one | | / \ | | url = {https://y.io/z.fbx#castle} | | | / \ | | } | John built houses in baroque style. | | / \ | | @baroque{castle, | | | |_____| | | url = {https://y.io/z.fbx#castle} | #john@baroque | +-----│-----+ | } | @baroque{john} | │ | @baroque{john} | | ├─ name: castle | | | └─ tag: house baroque | +----------------------------------------+ | @house{baroque} [3D mesh ] | @todo{baroque} +-[remotestorage.io / localstorage]------+ | O + name: john | | #baroque@todo@house | | /|\ | | | ... 
| | / \ | | +----------------------------------------+ +--------+ | </code></pre> <blockquote> <p>both the <code>#john@baroque</code>-bib and the BibTex <code>@baroque{john}</code> result in the same XRWG; on top of that, two tags (<code>house</code> and <code>todo</code>) are now associated with text/objectname/tag ‘baroque’.</p> </blockquote> <p>As seen above, the XRWG can expand <a href="https://github.com/coderofsalvation/hashtagbibs">bibs</a> (and the whole scene) to BibTeX.<br> This allows hasslefree authoring and copy-paste of associations <strong>for and by humans</strong>, but also makes these URLs possible:</p> <table> <thead> <tr> <th>URL example</th> <th>Result</th> </tr> </thead> <tbody> <tr> <td><code>https://my.com/foo.gltf#baroque</code></td> <td>draws lines between mesh <code>john</code>, 3D mesh <code>castle</code>, text <code>John built (..)</code></td> </tr> <tr> <td><code>https://my.com/foo.gltf#john</code></td> <td>draws lines between mesh <code>john</code>, and the text <code>John built (..)</code></td> </tr> <tr> <td><code>https://my.com/foo.gltf#house</code></td> <td>draws lines between mesh <code>castle</code>, and other objects with tag <code>house</code> or <code>todo</code></td> </tr> </tbody> </table> <blockquote> <p><a href="https://github.com/coderofsalvation/hashtagbibs">hashtagbibs</a> potentially allow the enduser to annotate text/objects by <strong>speaking/typing/scanning associations</strong>, which the XR Browser saves to remotestorage (or localStorage per toplevel URL).
It also allows referencing BibTags per URI later on: <code>https://y.io/z.fbx#@baroque@todo</code> e.g.</p> </blockquote> <p>The XRWG allows XR Browsers to show/hide relationships in realtime at various levels:</p> <ul> <li>wordmatch <strong>inside</strong> <code>src</code> text</li> <li>wordmatch <strong>inside</strong> <code>href</code> text</li> <li>wordmatch object-names</li> <li>wordmatch object-tagnames</li> </ul> <p>Spatial wires can be rendered between words/objects etc.<br> Some pointers for good UX (but not necessary for XR Fragment compatibility):</p> <ol start="9"> <li>The XR Browser needs to adjust tag-scope based on the enduser’s needs/focus (infinite tagging only makes sense when the environment is scaled down significantly)</li> <li>The XR Browser should always allow the human to view/edit the metadata, by clicking ‘toggle metadata’ on the ‘back’ (contextmenu e.g.) of any XR text, anywhere anytime.</li> <li>respect multi-line BibTeX metadata in text because of <a href="#core-principle">the core principle</a></li> <li>Default font (unless specified otherwise) is a modern monospace font, for maximized tabular expressiveness (see <a href="#core-principle">the core principle</a>).</li> <li>anti-pattern: hardcoupling an XR Browser with a mandatory <strong>markup/scripting-language</strong> which departs from unobtrusive plain text (HTML/VRML/Javascript) (see <a href="#core-principle">the core principle</a>)</li> <li>anti-pattern: limiting human introspection, by abandoning plain text as first tag citizen.</li> </ol> <blockquote> <p>The simplicity of appending metadata (and leveling the metadata-playfield between humans and machines) is also demonstrated by <a href="https://visual-meta.info">visual-meta</a> in greater detail.</p> </blockquote> <p>Fictional chat:</p> <pre><code><John> Hey what about this: https://my.com/station.gltf#pos=0,0,1&rot=90,2,0&t=500,1000
<Sarah> I'm checking it right now
<Sarah> I don't see everything..where's our text from yesterday?
<John> Ah wait, that's tagged with tag 'draft' (and hidden)..hold on, try this:
<John> https://my.com/station.gltf#.draft&pos=0,0,1&rot=90,2,0&t=500,1000
<Sarah> how about we link the draft to the upcoming YELLO-event?
<John> ok I'm adding #draft@YELLO
<Sarah> Yesterday I also came up with other useful associations between other texts in the scene: #event#YELLO #2025@YELLO
<John> thanks, added.
<Sarah> Btw. I stumbled upon this spatial book which references station.gltf in some chapters:
<Sarah> https://thecommunity.org/forum/foo/mytrainstory.txt
<John> interesting, I'm importing mytrainstory.txt into station.gltf
<John> ah yes, chapter three points to trainterminal_2A in the scene, cool
</code></pre> <h2 id="default-data-uri-mimetype">Default Data URI mimetype</h2> <p>The <code>src</code>-values work as expected (respecting mime-types), however:</p> <p>The XR Fragment specification bumps the traditional default browser-mimetype</p> <p><code>text/plain;charset=US-ASCII</code></p> <p>to a hashtagbib(tex)-friendly one:</p> <p><code>text/plain;charset=utf-8;bib=^@</code></p> <p>This indicates that:</p> <ul> <li>utf-8 is supported by default</li> <li>lines beginning with <code>@</code> will not be rendered verbatim by default (<a href="https://github.com/coderofsalvation/hashtagbibs#hashtagbib-mimetypes">read more</a>)</li> <li>the XRWG should expand bibs occurring in text (<code>#contactjohn@todo@important</code> e.g.) to BibTex</li> </ul> <p>By doing so, the XR Browser (applications-layer) can interpret microformats like <a href="https://visual-meta.info">visual-meta</a> to connect text further with its environment (setting up links between textual/spatial objects automatically, e.g.).</p> <blockquote> <p>for more info on this mimetype see <a href="https://github.com/coderofsalvation/hashtagbibs">bibs</a></p> </blockquote> <p>Advantages:</p> <ul> <li>auto-expanding of <a href="https://github.com/coderofsalvation/hashtagbibs">hashtagbibs</a> associations</li>
<li>out-of-the-box (de)multiplex human text and metadata in one go (see <a href="#core-principle">the core principle</a>)</li> <li>no network-overhead for metadata (see <a href="#core-principle">the core principle</a>)</li> <li>ensuring high FPS: HTML/RDF historically is too ‘requesty’/‘parsy’ for game studios</li> <li>rich send/receive/copy-paste everywhere by default, metadata being retained (see <a href="#core-principle">the core principle</a>)</li> <li>netto result: less webservices, therefore less servers, and overall better FPS in XR</li> </ul> <blockquote> <p>This significantly expands expressiveness and portability of human tagged text, by <strong>postponing machine-concerns to the end of the human text</strong> in contrast to literal interweaving of content and markupsymbols (or extra network requests, webservices e.g.).</p> </blockquote> <p>For all other purposes, regular mimetypes can be used (but are not required by the spec).<br></p> <h2 id="url-and-data-uri">URL and Data URI</h2> <pre><code> +--------------------------------------------------------------+ +------------------------+ | | | author.com/article.txt | | index.gltf | +------------------------+ | │ | | | | ├── ◻ article_canvas | | Hello friends. | | │ └ src: ://author.com/article.txt | | | | │ | | @book{greatgatsby | | └── ◻ note_canvas | | ... | | └ src:`data:welcome human\n@book{sunday...}` | | } | | | +------------------------+ | | +--------------------------------------------------------------+ </code></pre> <p>The enduser will only see <code>welcome human</code> and <code>Hello friends</code> rendered verbatim (see mimetype). The beauty is that text in Data URI automatically promotes rich copy-paste (retaining metadata). In both cases, the text gets rendered immediately (onto a plane geometry, hence the name ‘_canvas’). 
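</p> <p>The verbatim-rendering described above can be sketched in a minimal, non-normative way as follows; the function name <code>demux</code> is illustrative only (not part of the spec), and it assumes the metadata starts at the first line beginning with <code>@</code>:</p> <pre><code>function demux(src){
  const lines = src.split('\n')
  // first line starting with '@' marks the start of the (hidden) metadata
  const i = lines.findIndex( (l) => l.startsWith('@') )
  if( i < 0 ) return { text: src, meta: '' }
  return { text: lines.slice(0,i).join('\n'),  // rendered verbatim
           meta: lines.slice(i).join('\n')  }  // hidden, primes the XRWG
}

demux('welcome human\n@book{sunday,\n}')
// { text: 'welcome human', meta: '@book{sunday,\n}' }
</code></pre> <p>A real XR Browser would additionally expand inline bibs (see the parser further below); this sketch only shows the split between human text and appended metadata.<p>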
The XR Fragment-compatible browser can let the enduser access visual-meta(data)-fields after interacting with the object (contextmenu e.g.).</p> <blockquote> <p>additional tagging using <a href="https://github.com/coderofsalvation/hashtagbibs">bibs</a>: to tag spatial object <code>note_canvas</code> with ‘todo’, the enduser can type or speak <code>#note_canvas@todo</code></p> </blockquote> <h2 id="xr-text-example-parser">XR Text example parser</h2> <p>To prime the XRWG with text from plain text <code>src</code>-values, here’s an example XR Text (de)multiplexer in javascript (which supports inline bibs & bibtex):</p> <pre><code>xrtext = {

  expandBibs: (text) => {
    let bibs = { regex: /(#[a-zA-Z0-9_+@\-]+(#)?)/g, tags: {}}
    text.replace( bibs.regex , (m,k,v) => {
      tok   = m.substr(1).split("@")
      match = tok.shift()
      if( tok.length ) tok.map( (t) => bibs.tags[t] = `@${t}{${match},\n}` )
      else if( match.substr(-1) == '#' )
        bibs.tags[match] = `@{${match.replace(/#/,'')}}`
      else bibs.tags[match] = `@${match}{${match},\n}`
    })
    return text.replace( bibs.regex, '') + Object.values(bibs.tags).join('\n')
  },

  decode: (str) => {
    // bibtex:    ↓@      ↓<tag|tag{phrase,|{ruler}>  ↓property  ↓end
    let pat   = [ /@/,    /^\S+[,{}]/,                /},/,      /}/ ]
    let tags  = [], text='', i=0, prop=''
    let lines = xrtext.expandBibs(str).replace(/\r?\n/g,'\n').split(/\n/)
    for( let i = 0; i < lines.length && !String(lines[i]).match( /^@/ ); i++ )
      text += lines[i]+'\n'

    bibtex = lines.join('\n').substr( text.length )
    bibtex.split( pat[0] ).map( (t) => {
      try{
        let v = {}
        if( !(t = t.trim()) ) return
        if( tag = t.match( pat[1] ) ) tag = tag[0]
        if( tag.match( /^{.*}$/ ) ) return tags.push({ruler:tag})
        if( tag.match( /}$/ )     ) return tags.push({k: tag.replace(/}$/,''), v: {}})
        t = t.substr( tag.length )
        t.split( pat[2] )
         .map( kv => {
           if( !(kv = kv.trim()) || kv == "}" ) return
           v[ kv.match(/\s?(\S+)\s?=/)[1] ] = kv.substr( kv.indexOf("{")+1 )
         })
        tags.push( { k:tag, v } )
      }catch(e){ console.error(e) }
    })
    return {text, tags}
  },

  encode: (text,tags) => {
    let str = text+"\n"
    for( let i in tags ){
      let item = tags[i]
      if( item.ruler ){
        str += `@${item.ruler}\n`
        continue;
      }
      str += `@${item.k}\n`
      for( let j in item.v ) str += `  ${j} = {${item.v[j]}}\n`
      str += `}\n`
    }
    return str
  }
}
</code></pre> <p>The above functions (de)multiplex text and metadata, expand bibs, and (de)serialize bibtex.</p> <blockquote> <p>the above can be used as a starting point for LLMs to translate/steelman it into a more formal form/language.</p> </blockquote> <pre><code>str = `
hello world
here are some hashtagbibs followed by bibtex:
#world
#hello@greeting
#another-section#

@{some-section}
@flap{
  asdf = {23423}
}`

var {tags,text} = xrtext.decode(str)           // demultiplex text & bibtex
tags.find( (t) => t.k == 'flap{' ).v.asdf = 1  // edit tag
tags.push({ k:'bar{', v:{abc:123} })           // add tag
console.log( xrtext.encode(text,tags) )        // multiplex text & bibtex back together
</code></pre> <p>This expands to the following (hidden by default) BibTex appendix:</p> <pre><code>hello world
here are some hashtagbibs followed by bibtex:

@{some-section}
@flap{
  asdf = {1}
}
@world{world,
}
@greeting{hello,
}
@{another-section}
@bar{
  abc = {123}
}
</code></pre> <blockquote> <p>when an XR browser updates the human text, a quick scan for nonmatching tags (<code>@book{nonmatchingbook</code> e.g.)
should be performed and prompt the enduser for deleting them.</p> </blockquote> <h1 id="transclusion-broken-link-resolution">Transclusion (broken link) resolution</h1> <p>In spirit of Ted Nelson’s ‘transclusion resolution’, there’s a soft-mechanism to harden links & minimize broken links in various ways:</p> <ol> <li>defining a different transport protocol (https vs ipfs or DAT) in <code>src</code> or <code>href</code> values can make a difference</li> <li>mirroring files on another protocol using (HTTP) errorcode tags in <code>src</code> or <code>href</code> properties</li> <li>in case of <code>src</code>: nesting a copy of the embedded object in the placeholder object (<code>embeddedObject</code>) will not be replaced when the request fails</li> </ol> <blockquote> <p>due to the popularity, maturity and extensiveness of HTTP codes for client/server communication, non-HTTP protocols easily map to HTTP codes (ipfs ERR_NOT_FOUND maps to 404 e.g.)</p> </blockquote> <p>For example:</p> <pre><code> +────────────────────────────────────────────────────────+ │ │ │ index.gltf │ │ │ │ │ │ #: #q=-offlinetext │ │ │ │ │ ├── ◻ buttonA │ │ │ └ href: http://foo.io/campagne.fbx │ │ │ └ href@404: ipfs://foo.io/campagne.fbx │ │ │ └ href@400: #q=clienterrortext │ │ │ └ ◻ offlinetext │ │ │ │ │ └── ◻ embeddedObject <--------- the meshdata inside embeddedObject will (not) │ └ src: https://foo.io/bar.gltf │ be flushed when the request (does not) succeed. │ └ src@404: http://foo.io/bar.gltf │ So worstcase the 3D data (of the time of publishing index.gltf) │ └ src@400: https://archive.org/l2kj43.gltf │ will be displayed. 
│ │
+────────────────────────────────────────────────────────+
</code></pre> <h1 id="topic-based-index-less-webrings">Topic-based index-less Webrings</h1> <p>As hashtags in URLs map to the XRWG, <code>href</code>-values can be used to promote topic-based index-less webrings.<br> Consider 3D scenes linking to each other using these <code>href</code> values:</p> <ul> <li><code>href: schoolA.edu/projects.gltf#math</code></li> <li><code>href: schoolB.edu/projects.gltf#math</code></li> <li><code>href: university.edu/projects.gltf#math</code></li> </ul> <p>These links would all show visible links to math-tagged objects in the scene.<br> To filter out non-related objects one could take it a step further using queries:</p> <ul> <li><code>href: schoolA.edu/projects.gltf#math&q=-topics math</code></li> <li><code>href: schoolB.edu/projects.gltf#math&q=-courses math</code></li> <li><code>href: university.edu/projects.gltf#math&q=-theme math</code></li> </ul> <blockquote> <p>This would hide all objects tagged with <code>topics</code>, <code>courses</code> or <code>theme</code> (including those tagged <code>math</code>), so that afterwards only objects tagged with <code>math</code> will be visible</p> </blockquote> <p>This makes spatial content multi-purpose, without the need to separate content into separate files, or show/hide things using a complex logiclayer like javascript.</p> <h1 id="security-considerations">Security Considerations</h1> <p>Since XR Text contains metadata too, the user should be able to set up tagging-rules, so the copy-paste feature can:</p> <ul> <li>filter out sensitive data when copy/pasting (XR text with <code>tag:secret</code> e.g.)</li> </ul> <h1 id="faq">FAQ</h1> <p><strong>Q:</strong> Why is everything HTTP GET-based, what about POST/PUT/DELETE HATEOAS?<br> <strong>A:</strong> Because it’s out of scope: XR Fragments specifies a read-only way to surf XR documents.
These things belong in the application layer (for example, an XR Hypermedia browser can decide to support POST/PUT/DELETE requests for embedded HTML through <code>src</code> values)</p> <hr> <p><strong>Q:</strong> Why isn’t there support for scripting, while we have things like WASM?<br> <strong>A:</strong> This is out of scope as it unhyperifies hypermedia, and this is up to XR hypermedia browser-extensions.<br> Historically, scripting/Javascript seems to have been able to turn webpages from hypermedia documents into their opposite (hyperscripted nonhypermedia documents).<br> In order to prevent this backward-movement (hypermedia tends to liberate people from finicky scripting), XR Fragments should never unhyperify itself by hardcoupling to a particular markup or scripting language. <a href="https://xrfragment.org/doc/RFC_XR_Macros.html">XR Macros</a> are an example of something which is probably smarter and safer for hypermedia browsers to implement, instead of going all-in with a turing-complete scripting language (and suffering the security consequences later).<br> XR Fragments supports filtering objects in a scene only, because in the history of the javascript-powered web, showing/hiding document-entities seems to be one of the most popular basic usecases.<br> Doing advanced scripting & networkrequests under the hood is obviously an interesting endeavour, but this is something which should not be hardcoupled with hypermedia.<br> This belongs to browser extensions.<br> Non-HTML Hypermedia browsers should make browser extensions the place to ‘extend’ experiences, in contrast to code/javascript inside hypermedia documents (which turned out to be a hypermedia antipattern).</p> <h1 id="iana-considerations">IANA Considerations</h1> <p>This document has no IANA actions.</p> <h1 id="acknowledgments">Acknowledgments</h1> <ul> <li><a href="https://nlnet.nl">NLNET</a></li> <li><a href="https://futureoftext.org">Future of Text</a></li> <li><a
href="https://visual-meta.info">visual-meta.info</a></li> </ul> <h1 id="appendix-definitions">Appendix: Definitions</h1> <table> <thead> <tr> <th>definition</th> <th>explanation</th> </tr> </thead> <tbody> <tr> <td>human</td> <td>a sentient being who thinks fuzzy, absorbs, and shares thought (by plain text, not markuplanguage)</td> </tr> <tr> <td>scene</td> <td>a (local/remote) 3D scene or 3D file (index.gltf e.g.)</td> </tr> <tr> <td>3D object</td> <td>an object inside a scene characterized by vertex-, face- and customproperty data.</td> </tr> <tr> <td>metadata</td> <td>custom properties of text, 3D Scene or Object(nodes), relevant to machines and a human minority (academics/developers)</td> </tr> <tr> <td>XR fragment</td> <td>URI Fragment with spatial hints like <code>#pos=0,0,0&t=1,100</code> e.g.</td> </tr> <tr> <td>the XRWG</td> <td>wordgraph (collapses 3D scene to tags)</td> </tr> <tr> <td>the hashbus</td> <td>hashtags map to camera/scene-projections</td> </tr> <tr> <td>spacetime hashtags</td> <td>positions camera, triggers scene-preset/time</td> </tr> <tr> <td>teleportation</td> <td>repositioning the enduser to a different position (or 3D scene/file)</td> </tr> <tr> <td>sourceportation</td> <td>teleporting the enduser to the original XR Document of an <code>src</code> embedded object.</td> </tr> <tr> <td>placeholder object</td> <td>a 3D object with src-metadata (which will be replaced by the src-data)</td> </tr> <tr> <td>src</td> <td>(HTML-piggybacked) metadata of a 3D object which instances content</td> </tr> <tr> <td>href</td> <td>(HTML-piggybacked) metadata of a 3D object which links to content</td> </tr> <tr> <td>query</td> <td>a URI Fragment-operator which queries object(s) from a scene like <code>#q=cube</code></td> </tr> <tr> <td>visual-meta</td> <td><a href="https://visual-meta.info">visual-meta</a> data appended to text/books/papers which is indirectly visible/editable in XR.</td> </tr> <tr> <td>requestless metadata</td> <td>metadata which
never spawns new requests (unlike RDF/HTML, which can cause framerate-dropping, hence not used a lot in games)</td> </tr> <tr> <td>FPS</td> <td>frames per second in spatial experiences (games, VR, AR e.g.), should be as high as possible</td> </tr> <tr> <td>introspective</td> <td>inward sensemaking (“I feel this belongs to that”)</td> </tr> <tr> <td>extrospective</td> <td>outward sensemaking (“I’m fairly sure John is a person who lives in Oklahoma”)</td> </tr> <tr> <td><code>◻</code></td> <td>ascii representation of a 3D object/mesh</td> </tr> <tr> <td>(un)obtrusive</td> <td>obtrusive: wrapping human text/thought in XML/HTML/JSON obfuscates human text into a salad of machine-symbols and words</td> </tr> <tr> <td>BibTeX</td> <td>simple tagging/citing/referencing standard for plaintext</td> </tr> <tr> <td>BibTag</td> <td>a BibTeX tag</td> </tr> <tr> <td>(hashtag)bibs</td> <td>an easy-to-speak/type/scan tagging SDL (<a href="https://github.com/coderofsalvation/hashtagbibs">see here</a>) which expands to BibTeX/JSON/XML</td> </tr> </tbody> </table> </section> </body> </html>