<!DOCTYPE html>
<html>
<head>
<title>XR Fragments</title>
<meta name="GENERATOR" content="github.com/mmarkdown/mmark Mmark Markdown Processor - mmark.miek.nl">
<meta charset="utf-8">
</head>
<body>
<!-- for annotated version see: https://raw.githubusercontent.com/ietf-tools/rfcxml-templates-and-schemas/main/draft-rfcxml-general-template-annotated-00.xml -->
<style type="text/css">
body{
font-family: monospace;
max-width: 900px;
font-size: 15px;
padding: 0% 20%;
line-height: 30px;
color:#555;
background:#F0F0F3
}
h1 { margin-top:40px; }
pre{ line-height:18px; }
a,a:visited,a:active{ color: #70f; }
code{
border: 1px solid #AAA;
border-radius: 3px;
padding: 0px 5px 2px 5px;
}
pre>code{
border:none;
border-radius:0px;
padding:0;
}
blockquote{
padding-left: 30px;
margin: 0;
border-left: 5px solid #CCC;
}
</style>
<br>
<h1>XR Fragments</h1>
<br>
<pre>
stream: IETF
area: Internet
status: informational
author: Leon van Kammen
date: 2023-04-12T00:00:00Z
workgroup: Internet Engineering Task Force
value: draft-XRFRAGMENTS-leonvankammen-00
</pre>
<h1 class="special" id="abstract">Abstract</h1>
<p>This draft offers a specification for 4D URLs &amp; navigation, to link 3D scenes and text together, with or without a network connection.
The specification promotes spatial addressability, sharing, navigation, querying and interactive text across (XR) browsers.
XR Fragments allows us to enrich existing data formats, by recursive use of existing technologies like <a href="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</a> &amp; <a href="https://visual-meta.info">visual-meta</a>.</p>
<section data-matter="main">
<h1 id="introduction">Introduction</h1>
<p>How can we add more features to existing text &amp; 3D scenes, without introducing new data formats?
Historically, there have been many attempts to create the ultimate markup language or 3D file format.
However, through the lens of authoring, their lowest common denominator is still: plain text.
XR Fragments allows us to enrich existing data formats, by recursive use of existing technologies:</p>
<ul>
<li>addressability &amp; navigation of 3D objects: <a href="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</a> + (src/href) metadata</li>
<li>addressability &amp; navigation of text objects: <a href="https://visual-meta.info">visual-meta</a></li>
</ul>
<h1 id="conventions-and-definitions">Conventions and Definitions</h1>
<ul>
<li>scene: a (local/remote) 3D scene or 3D file (e.g. index.gltf)</li>
<li>3D object: an object inside a scene, characterized by vertex-, face- and custom-property data.</li>
<li>metadata: custom properties defined on a 3D scene or object(node)</li>
<li>XR fragment: URI Fragment with spatial hints (e.g. <code>#pos=0,0,0&amp;t=1,100</code>; see the parsing sketch below)</li>
<li>src: a (HTML-piggybacked) metadata-attribute of a 3D object which instances content</li>
<li>href: a (HTML-piggybacked) metadata-attribute of a 3D object which links to content</li>
<li>query: a URI Fragment operator which queries object(s) from a scene (<code>#q=cube</code>)</li>
<li><a href="https://visual-meta.info">visual-meta</a>: metadata appended to text which is only indirectly visible/editable in XR.</li>
</ul>
<p>The key words &ldquo;MUST&rdquo;, &ldquo;MUST NOT&rdquo;, &ldquo;REQUIRED&rdquo;, &ldquo;SHALL&rdquo;, &ldquo;SHALL NOT&rdquo;, &ldquo;SHOULD&rdquo;, &ldquo;SHOULD NOT&rdquo;, &ldquo;RECOMMENDED&rdquo;, &ldquo;NOT RECOMMENDED&rdquo;, &ldquo;MAY&rdquo;, and &ldquo;OPTIONAL&rdquo; in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.</p>
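<p>A minimal sketch of how a browser could parse such an XR Fragment; the helper name <code>parseXRFragment</code> is illustrative, not defined by this spec:</p>
<pre><code>// Sketch: parse an XR Fragment like "#pos=0,0,0&amp;t=1,100" into key/values.
// Values are comma-separated lists; "#q=cube" yields a single-item list.
function parseXRFragment(hash: string): Record&lt;string, string[]&gt; {
  const frag: Record&lt;string, string[]&gt; = {};
  for (const [key, value] of new URLSearchParams(hash.replace(/^#/, ''))) {
    frag[key] = value.split(','); // "1,0,1" -&gt; ["1", "0", "1"]
  }
  return frag;
}

parseXRFragment('#pos=0,0,0&amp;t=1,100'); // { pos: ['0','0','0'], t: ['1','100'] }
</code></pre>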
<h1 id="navigating-3d">Navigating 3D</h1>
<p>Here&rsquo;s an ASCII representation of a 3D scene-graph which contains 3D objects (<code>◻</code>) and their metadata:</p>
<pre><code> +--------------------------------------------------------+
| |
| index.gltf |
| │ |
| ├── ◻ buttonA |
| │ └ href: #pos=1,0,1&amp;t=100,200 |
| │ |
| └── ◻ buttonB |
| └ href: other.fbx |
| |
+--------------------------------------------------------+
</code></pre>
<p>An XR Fragment-compatible browser viewing this scene allows the end-user to interact with <code>buttonA</code> and <code>buttonB</code>.
In case of <code>buttonA</code> the end-user will be teleported to another location and time in the <strong>currently loaded scene</strong>, whereas <code>buttonB</code> will
<strong>replace the current scene</strong> with a new one (<code>other.fbx</code>).</p>
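<p>A minimal sketch of how such a browser might act on an activated <code>href</code>; the engine hooks <code>moveCameraTo</code>, <code>seekAnimation</code> and <code>loadScene</code> are assumed stubs, not part of this spec:</p>
<pre><code>declare function moveCameraTo(pos: number[]): void; // engine-specific, assumed
declare function seekAnimation(t: number[]): void;  // engine-specific, assumed
declare function loadScene(url: string): void;      // engine-specific, assumed

// Sketch: same-document fragments teleport, other URLs replace the scene.
function onHrefActivated(href: string): void {
  if (href.startsWith('#')) {
    const frag = parseXRFragment(href); // see the sketch above
    if (frag['pos']) moveCameraTo(frag['pos'].map(Number)); // e.g. [1, 0, 1]
    if (frag['t'])   seekAnimation(frag['t'].map(Number));  // e.g. [100, 200]
  } else {
    loadScene(href); // e.g. "other.fbx" replaces the current scene
  }
}
</code></pre>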
<h1 id="navigating-text">Navigating text</h1>
<p>Text in XR has to be unobtrusive, for readers as well as authors.
We think and speak in simple text, and given the new paradigm of XR interfaces, (spoken) text should logically be enriched <em>afterwards</em> (lazy metadata).
Therefore, XR Fragment-compliant text will just be plain text, and <strong>not yet another markup language</strong>.
In contrast to markup languages, this means humans always need to be served first, and machines later.</p>
<blockquote>
<p>Basically, a direct feedback loop between unobtrusive text and the human eye.</p>
</blockquote>
<p>Reality has shown that outsourcing rich text-manipulation to commercial formats or mono-markup browsers (HTML) has its use cases, but
it also introduces barriers to thought-translation (which uses simple words).
As Marshall McLuhan said: we have become irrevocably involved with, and responsible for, each other.</p>
<p>In order to enjoy hassle-free, batteries-included programmable text (e.g. glossaries, flexible views, drag-drop), XR Fragments supports
<a href="https://visual-meta.info">visual-meta</a>(data).</p>
<h2 id="default-data-uri-mimetype">Default Data URI mimetype</h2>
<p>The XR Fragment specification bumps the traditional default browser-mimetype</p>
<p><code>text/plain;charset=US-ASCII</code></p>
<p>into:</p>
<p><code>text/plain;charset=utf-8;visual-meta=1</code></p>
<p>This means that <a href="https://visual-meta.info">visual-meta</a>(data) can be appended to plain text without being displayed.</p>
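<p>A minimal sketch of how a renderer could honor this mimetype, assuming metadata starts at the <code>@{visual-meta-start}</code> marker used in the examples below:</p>
<pre><code>// Sketch: split plain text into a renderable part and appended visual-meta.
function splitVisualMeta(text: string): { visible: string; meta: string } {
  const i = text.indexOf('@{visual-meta-start}');
  return i === -1
    ? { visible: text, meta: '' }                         // no metadata appended
    : { visible: text.slice(0, i), meta: text.slice(i) }; // render `visible` only
}
</code></pre>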
<h3 id="url-and-data-uri">URL and Data URI</h3>
<pre><code> +--------------------------------------------------------------+ +------------------------+
| | | author.com/article.txt |
| index.gltf | +------------------------+
| │ | | |
| ├── ◻ article_canvas | | Hello friends. |
| │ └ src: ://author.com/article.txt | | |
| │ | | @{visual-meta-start} |
| └── ◻ note_canvas | | ... |
| └ src:`data:welcome human @{visual-meta-start}...` | +------------------------+
| |
| |
+--------------------------------------------------------------+
</code></pre>
<p>The difference is that text (+ visual-meta data) in a Data URI is saved into the scene, which also promotes rich copy-paste.
In both cases the text will be rendered immediately (onto a plane geometry, hence the name &lsquo;_canvas&rsquo;).
The end-user can access visual-meta(data)-fields only after interacting with the object.</p>
<blockquote>
<p>NOTE: this is not to say that XR Browsers should not load HTML/PDF/etc-URLs through <code>src</code>-metadata, it is just that <code>text/plain;charset=utf-8;visual-meta=1</code> is the minimum requirement.</p>
</blockquote>
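<p>A minimal sketch of resolving both kinds of <code>src</code> values shown above; treating <code>://</code>-prefixed URLs as protocol-relative is an assumption here:</p>
<pre><code>// Sketch: resolve a `src` value to text, for both cases in the diagram above.
async function resolveSrc(src: string): Promise&lt;string&gt; {
  if (src.startsWith('data:')) {
    // inline Data URI: the text (and visual-meta) travels with the scene file
    return src.slice('data:'.length);
  }
  // remote URL such as "://author.com/article.txt" (assumed protocol-relative)
  const res = await fetch(src.startsWith('://') ? 'https' + src : src);
  return res.text();
}
</code></pre>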
<h2 id="omnidirectional-xr-annotations">omnidirectional XR annotations</h2>
<pre><code> +---------------------------------------------------------------+
| |
| index.gltf |
| │ |
| ├── ◻ todo |
| │ └ src:`data:learn about ARC @{visual-meta-start}...`|
| │ |
| └── ◻ ARC |
| └── ◻ plane |
| └ src: `data:ARC was revolutionary |
| @{visual-meta-start} |
| @{glossary-start} |
| @entry{ |
| name = {ARC}, |
| description = {Engelbart Concept: |
| Augmentation Research Center, |
| The name of Doug's lab at SRI. |
| }, |
| }` |
| |
+---------------------------------------------------------------+
</code></pre>
<p>Here we can see a 3D object of ARC, to which the end-user added a text note (basically a plane geometry with <code>src</code>).
The end-user can view/edit visual-meta(data)-fields only after interacting with the object.
This allows the 3D scene to perform omnidirectional features for free, by omni-connecting the word &lsquo;ARC&rsquo; (see the sketch after this list):</p>
<ul>
<li>the ARC object can draw a line to the &lsquo;ARC was revolutionary&rsquo;-note</li>
<li>the &lsquo;ARC was revolutionary&rsquo;-note can draw line to the &lsquo;learn about ARC&rsquo;-note</li>
<li>the &lsquo;learn about ARC&rsquo;-note can draw a line to the ARC 3D object</li>
</ul>
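<p>A minimal sketch of such omni-connecting, with a hypothetical <code>drawLine</code> hook; object names and note texts are taken from the diagram above:</p>
<pre><code>declare function drawLine(a: string, b: string): void; // engine-specific, assumed

// Sketch: connect every object that mentions (or is named after) a term.
function connectTerm(term: string, notes: Record&lt;string, string&gt;): void {
  const mentions = Object.keys(notes).filter(
    (name) =&gt; name === term || notes[name].includes(term)
  );
  for (let i = 0; i &lt; mentions.length; i++) {
    drawLine(mentions[i], mentions[(i + 1) % mentions.length]); // close the loop
  }
}

connectTerm('ARC', {
  ARC: '',                            // the 3D object itself
  plane: 'ARC was revolutionary ...', // the note attached to it
  todo: 'learn about ARC ...',        // the standalone todo note
});
</code></pre>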
<h1 id="hyper-copy-paste">HYPER copy/paste</h1>
<p>The previous example offers something exciting compared to simple textual copy-paste:
XR Fragments offers 4D- and HYPER-copy/paste: time, space and text interlinked.
Therefore, the end-user of an XR Fragment-compatible browser can copy/paste/share data in these ways (a sketch of such a payload follows the list):</p>
<ul>
<li>copy ARC 3D object (incl. animation) &amp; paste elsewhere including visual-meta(data)</li>
<li>select the word ARC in any text, and paste a bundle of anything ARC-related</li>
</ul>
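<p>A sketch of what such a copied bundle could contain; the field names are illustrative, not normative:</p>
<pre><code>// Illustrative clipboard payload for HYPER copy/paste (field names assumed).
interface XRClipboardPayload {
  object?: string;     // e.g. the serialized ARC 3D object (incl. animation)
  text?: string;       // the selected word or passage, e.g. "ARC"
  visualMeta: string;  // appended metadata: citations, glossary entries, ...
  pos?: number[];      // where the copy was made (space) ...
  t?: number[];        // ... and when (time), e.g. [100, 200]
}
</code></pre>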
<h2 id="plain-text-with-optional-visual-meta">Plain Text (with optional visual-meta)</h2>
<p>In contrast to markup languages, the (dictated/written) text needs no parsing and stays intact, by postponing metadata to the appendix.</p>
<p>This allows for a very economic XR way to:</p>
<ul>
<li>directly write, dictate and render text (=fast, without markup-parser overhead)</li>
<li>add/load metadata later (if provided)</li>
<li>reflect end-user interactions with text (annotations, mutations) back into the visual-meta(data) Data URI</li>
<li>automatically cite the (mutated) source when copy/pasting text</li>
<li>annotate 3D objects as if they were textual representations (convert a 3D document to text)</li>
</ul>
<blockquote>
<p>NOTE: visual-meta never breaks the original intended text (in contrast to e.g. forgetting an HTML closing tag)</p>
</blockquote>
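<p>A minimal sketch of the automatic citing mentioned above; the BibTeX-like citation fields are an assumption, mirroring the glossary style shown earlier:</p>
<pre><code>// Sketch: copied text carries its source as appended visual-meta,
// so pasting elsewhere automatically cites the (possibly mutated) source.
function copyWithCitation(selection: string, sourceUrl: string): string {
  return selection +
    '\n@{visual-meta-start}\n' +
    '@document{source,\n' +
    '  url = {' + sourceUrl + '},\n' +
    '}';
}
</code></pre>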
<h1 id="embedding-3d-content">Embedding 3D content</h1>
<p>Here&rsquo;s an ASCII representation of a 3D scene-graph with 3D objects (<code>◻</code>) which embed remote &amp; local 3D objects (<code>◻</code>), with (or without) using queries:</p>
<pre><code> +--------------------------------------------------------+ +-------------------------+
| | | |
| index.gltf | | ocean.com/aquarium.fbx |
| │ | | │ |
| ├── ◻ canvas | | └── ◻ fishbowl |
| │ └ src: painting.png | | ├─ ◻ bass |
| │ | | └─ ◻ tuna |
| ├── ◻ aquariumcube | | |
| │ └ src: ://rescue.com/fish.gltf#q=bass%20tuna | +-------------------------+
| │ |
| ├── ◻ bedroom |
| │ └ src: #q=canvas |
| │ |
| └── ◻ livingroom |
| └ src: #q=canvas |
| |
+--------------------------------------------------------+
</code></pre>
<p>An XR Fragment-compatible browser viewing this scene lazy-loads and projects <code>painting.png</code> onto the (plane) object called <code>canvas</code> (which is copy-instanced in the bedroom and livingroom).
Also, after lazy-loading <code>ocean.com/aquarium.fbx</code>, only the queried objects <code>bass</code> and <code>tuna</code> will be instanced inside <code>aquariumcube</code>.
Resizing will happen according to its placeholder object (<code>aquariumcube</code>), see chapter Scaling.</p>
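<p>A minimal sketch of applying such a query to a loaded scene-graph; the <code>Node</code> shape is an assumption, not a normative type:</p>
<pre><code>interface Node { name: string; children: Node[] } // assumed scene-graph shape

// Sketch: "#q=bass%20tuna" keeps only the queried objects for instancing
// inside the placeholder object (aquariumcube).
function queryScene(root: Node, q: string): Node[] {
  const wanted = new Set(decodeURIComponent(q).split(' ')); // "bass tuna"
  const hits: Node[] = [];
  const walk = (n: Node): void =&gt; {
    if (wanted.has(n.name)) hits.push(n);
    n.children.forEach(walk);
  };
  walk(root);
  return hits;
}
</code></pre>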
<h1 id="list-of-xr-uri-fragments">List of XR URI Fragments</h1>
<h1 id="security-considerations">Security Considerations</h1>
<p>TODO Security</p>
<h1 id="iana-considerations">IANA Considerations</h1>
<p>This document has no IANA actions.</p>
<h1 id="acknowledgments">Acknowledgments</h1>
<p>TODO acknowledge.</p>
</section>
</body>
</html>