<!-- for annotated version see: https://raw.githubusercontent.com/ietf-tools/rfcxml-templates-and-schemas/main/draft-rfcxml-general-template-annotated-00.xml -->
<p>This draft offers a specification for 4D URLs &amp; navigation, to link 3D scenes and text together, with or without a network connection.
The specification promotes spatial addressability, sharing, navigation, querying and interactive text across (XR) browsers.
XR Fragments allows us to enrich existing dataformats, by recursive use of existing technologies like <a href="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</a> &amp; <a href="https://visual-meta.info">visual-meta</a>.</p>
<section data-matter="main">
<h1 id="introduction">Introduction</h1>
<p>How can we add more features to existing text &amp; 3D scenes, without introducing new dataformats?
Historically, there have been many attempts to create the ultimate markup language or 3D file format.</p>
<p>An XR Fragment-compatible browser viewing this scene allows the end-user to interact with <code>buttonA</code> and <code>buttonB</code>.
In case of <code>buttonA</code> the end-user will be teleported to another location and time in the <strong>currently loaded scene</strong>, whereas <code>buttonB</code> will
<strong>replace the current scene</strong> with a new one (<code>other.fbx</code>).</p>
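<p>The distinction above can be sketched as follows. This is a minimal, non-normative sketch: it assumes a positional fragment key such as <code>pos=</code> (hypothetical here), and treats any href with a path as a scene replacement, while a fragment-only href navigates within the current scene.</p>

```python
from urllib.parse import urlparse

def classify_href(href):
    """Sketch: decide whether an XR Fragment href navigates within
    the currently loaded scene or replaces it with a new one.
    A href with a path or host (e.g. 'other.fbx#pos=0,0,0') loads a
    new scene; a fragment-only href (e.g. '#pos=1,1,0') teleports
    within the current scene. The 'pos=' key is an assumption."""
    parsed = urlparse(href)
    if parsed.path or parsed.netloc:
        return ("replace-scene", parsed.path or parsed.netloc)
    return ("teleport", parsed.fragment)
```

<p>Under these assumptions, <code>buttonA</code>'s href would classify as a teleport and <code>buttonB</code>'s as a scene replacement.</p>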
<p>Text in XR has to be unobtrusive, for readers as well as authors.
We think and speak in simple text, and given the new paradigm of XR interfaces, (spoken) text logically must be enriched <em>afterwards</em> (lazy metadata).
Therefore, XR Fragment-compliant text will just be plain text, and <strong>not yet another markup language</strong>.
In contrast to markup languages, this means humans are always served first, and machines later.</p>
<blockquote>
<p>Basically, a direct feedback loop between unobtrusive text and the human eye.</p>
</blockquote>
<p>Reality has shown that outsourcing rich text manipulation to commercial formats or mono-markup browsers (HTML) has its use cases, but
it also introduces barriers to thought-translation (which uses simple words).
As Marshall McLuhan said: we have become irrevocably involved with, and responsible for, each other.</p>
<p>In order to enjoy hassle-free, batteries-included programmable text (glossaries, flexible views, drag-drop e.g.), XR Fragments supports loading plain text both from an URL and from a Data URI.</p>
<p>The difference is that text (+visual-meta data) in a Data URI is saved into the scene, which also promotes rich copy-paste.
In both cases the text gets rendered immediately (onto a plane geometry, hence the name ‘_canvas’).
The end-user can access visual-meta(data)-fields only after interacting with the object.</p>
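<p>A Data URI carrying such text can be built and read back with standard percent-encoding. The sketch below is a minimal illustration: the mediatype <code>text/plain;charset=utf-8;visual-meta=1</code> is the minimum requirement named by this draft, while the helper names themselves are hypothetical.</p>

```python
from urllib.parse import quote, unquote

# minimum mediatype requirement per the draft
MEDIATYPE = "text/plain;charset=utf-8;visual-meta=1"

def text_to_datauri(text):
    """Embed plain text (optionally with visual-meta appended)
    into a Data URI that can be saved inside the scene file."""
    return f"data:{MEDIATYPE},{quote(text)}"

def datauri_to_text(uri):
    """Recover the text from such a Data URI."""
    header, _, payload = uri.partition(",")
    assert header == "data:" + MEDIATYPE
    return unquote(payload)
```

<p>Because the payload is percent-encoded, commas and newlines in the text (or its visual-meta block) survive the round trip into the URI.</p>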
<blockquote>
<p>NOTE: this is not to say that XR Browsers should not load HTML/PDF/etc-URLs through <code>src</code>-metadata, it is just that <code>text/plain;charset=utf-8;visual-meta=1</code> is the minimum requirement.</p>
</blockquote>
<h1 id="embedding-3d-content">Embedding 3D content</h1>
<p>Here’s an ascii representation of a 3D scene-graph with 3D objects (<code>◻</code>) which embeds remote &amp; local 3D objects (<code>◻</code>), with or without using queries:</p>
<p>An XR Fragment-compatible browser viewing this scene lazy-loads and projects <code>painting.png</code> onto the (plane) object called <code>canvas</code> (which is copy-instanced in the bed and livingroom).
Also, after lazy-loading <code>ocean.com/aquarium.gltf</code>, only the queried objects <code>bass</code> and <code>tuna</code> will be instanced inside <code>aquariumcube</code>.</p>
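<p>The query-driven instancing described above can be sketched as a simple selection over the remote scene's object names. This is a simplified, non-normative reading of XR Fragment queries: the space-separated query string and the function name are assumptions for illustration.</p>

```python
def select_objects(scene_objects, query):
    """Sketch: given the object names found in a lazy-loaded scene
    (e.g. ocean.com/aquarium.gltf) and a space-separated object
    query, return only the requested objects for instancing
    inside the target object (e.g. 'aquariumcube')."""
    wanted = set(query.split())
    return [name for name in scene_objects if name in wanted]

# hypothetical contents of aquarium.gltf
aquarium = ["bass", "tuna", "shark", "coral"]
```

<p>With the query <code>bass tuna</code>, only those two objects out of the four would be instanced; the rest of the remote scene is never added to the scene-graph.</p>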