added Macros RFC

Leon van Kammen 2023-09-07 15:53:32 +02:00
parent 4f5e3f5cea
commit 37e68ef433
5 changed files with 638 additions and 411 deletions

@@ -92,12 +92,12 @@ XR Fragments allows us to enrich existing dataformats, by recursive use of exist
<p>How can we add more features to existing text &amp; 3D scenes, without introducing new dataformats?<br>
Historically, there have been many attempts to create the ultimate markuplanguage or 3D fileformat.<br>
Their lowest common denominator is: (co)authoring using plain text.<br>
XR Fragments allows us to enrich/connect existing dataformats, by recursive use of existing technologies:<br></p>
<ol>
<li>addressability and navigation of 3D scenes/objects: <a href="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</a> + src/href spatial metadata</li>
<li>hasslefree tagging across text and spatial objects using <a href="https://github.com/coderofsalvation/tagbibs">bibs</a> / <a href="https://en.wikipedia.org/wiki/BibTeX">BibTags</a> as appendix (see <a href="https://visual-meta.info">visual-meta</a> e.g.)</li>
</ol>
<blockquote>
@@ -113,109 +113,18 @@ This also means that the repair-ability of machine-matters should be human frien
<p>&ldquo;When a car breaks down, the ones <strong>without</strong> turbosupercharger are easier to fix&rdquo;</p>
</blockquote>
<p>Let&rsquo;s always focus on average humans: our fuzzy symbolical mind must be served first, before serving a greater <a href="https://en.wikipedia.org/wiki/Borg">categorized typesafe RDF hive mind</a>.</p>
<blockquote>
<p>Humans first, machines (AI) later.</p>
</blockquote>
<h1 id="conventions-and-definitions">Conventions and Definitions</h1>
<p>See appendix below in case certain terms are not clear.</p>
<h1 id="list-of-uri-fragments">List of URI Fragments</h1> <h1 id="list-of-uri-fragments">List of URI Fragments</h1>
@@ -306,7 +215,7 @@ This also means that the repair-ability of machine-matters should be human frien
</tr>
</tbody>
</table>
<p>Popular compatible 3D fileformats: <code>.gltf</code>, <code>.obj</code>, <code>.fbx</code>, <code>.usdz</code>, <code>.json</code> (THREE.js), <code>.dae</code> and so on.</p>
<blockquote>
<p>NOTE: XR Fragments are file-agnostic, which means that the metadata exists in programmatic 3D scene(nodes) too.</p>
@@ -958,7 +867,109 @@ Therefore, the enduser in an XR Fragment-compatible browser can copy/paste/share
<h1 id="acknowledgments">Acknowledgments</h1>
<ul>
<li><a href="https://nlnet.nl">NLNET</a></li>
<li><a href="https://futureoftext.org">Future of Text</a></li>
<li><a href="https://visual-meta.info">visual-meta.info</a></li>
</ul>
<h1 id="appendix-definitions">Appendix: Definitions</h1>
<table>
<thead>
<tr>
<th>definition</th>
<th>explanation</th>
</tr>
</thead>
<tbody>
<tr>
<td>human</td>
<td>a sentient being who thinks fuzzy, absorbs, and shares thought (by plain text, not markuplanguage)</td>
</tr>
<tr>
<td>scene</td>
<td>a (local/remote) 3D scene or 3D file (index.gltf e.g.)</td>
</tr>
<tr>
<td>3D object</td>
<td>an object inside a scene characterized by vertex-, face- and customproperty data.</td>
</tr>
<tr>
<td>metadata</td>
<td>custom properties of text, 3D Scene or Object(nodes), relevant to machines and a human minority (academics/developers)</td>
</tr>
<tr>
<td>XR fragment</td>
<td>URI Fragment with spatial hints like <code>#pos=0,0,0&amp;t=1,100</code> e.g.</td>
</tr>
<tr>
<td>src</td>
<td>(HTML-piggybacked) metadata of a 3D object which instances content</td>
</tr>
<tr>
<td>href</td>
<td>(HTML-piggybacked) metadata of a 3D object which links to content</td>
</tr>
<tr>
<td>query</td>
<td>a URI Fragment-operator which queries object(s) from a scene like <code>#q=cube</code></td>
</tr>
<tr>
<td>visual-meta</td>
<td><a href="https://visual-meta.info">visual-meta</a> data appended to text/books/papers which is indirectly visible/editable in XR.</td>
</tr>
<tr>
<td>requestless metadata</td>
<td>metadata which never spawns new requests (unlike RDF/HTML, which can cause framerate-dropping, hence not used a lot in games)</td>
</tr>
<tr>
<td>FPS</td>
<td>frames per second in spatial experiences (games,VR,AR e.g.), should be as high as possible</td>
</tr>
<tr>
<td>introspective</td>
<td>inward sensemaking (&ldquo;I feel this belongs to that&rdquo;)</td>
</tr>
<tr>
<td>extrospective</td>
<td>outward sensemaking (&ldquo;I&rsquo;m fairly sure John is a person who lives in Oklahoma&rdquo;)</td>
</tr>
<tr>
<td><code>◻</code></td>
<td>ASCII representation of a 3D object/mesh</td>
</tr>
<tr>
<td>(un)obtrusive</td>
<td>obtrusive: wrapping human text/thought in XML/HTML/JSON obfuscates human text into a salad of machine-symbols and words</td>
</tr>
<tr>
<td>BibTeX</td>
<td>simple tagging/citing/referencing standard for plaintext</td>
</tr>
<tr>
<td>BibTag</td>
<td>a BibTeX tag</td>
</tr>
</tbody>
</table>
</section>
</body>
@@ -105,11 +105,11 @@ XR Fragments allows us to enrich existing dataformats, by recursive use of exist
How can we add more features to existing text & 3D scenes, without introducing new dataformats?<br>
Historically, there have been many attempts to create the ultimate markuplanguage or 3D fileformat.<br>
Their lowest common denominator is: (co)authoring using plain text.<br>
XR Fragments allows us to enrich/connect existing dataformats, by recursive use of existing technologies:<br>

1. addressability and navigation of 3D scenes/objects: [URI Fragments](https://en.wikipedia.org/wiki/URI_fragment) + src/href spatial metadata
1. hasslefree tagging across text and spatial objects using [bibs](https://github.com/coderofsalvation/tagbibs) / [BibTags](https://en.wikipedia.org/wiki/BibTeX) as appendix (see [visual-meta](https://visual-meta.info) e.g.)

> NOTE: The chapters in this document are ordered from highlevel to lowlevel (technical) as much as possible
@@ -120,31 +120,16 @@ This also means that the repair-ability of machine-matters should be human frien

> "When a car breaks down, the ones **without** turbosupercharger are easier to fix"

Let's always focus on average humans: our fuzzy symbolical mind must be served first, before serving a greater [categorized typesafe RDF hive mind](https://en.wikipedia.org/wiki/Borg).

> Humans first, machines (AI) later.

# Conventions and Definitions

See appendix below in case certain terms are not clear.
# List of URI Fragments
@@ -166,7 +151,7 @@ Let's always focus on average humans: the 'fuzzy symbolical mind' must be served

| `href` | string | `"href": "b.gltf"` | available through custom property in 3D fileformats |
| `src`  | string | `"src": "#q=cube"` | available through custom property in 3D fileformats |

Popular compatible 3D fileformats: `.gltf`, `.obj`, `.fbx`, `.usdz`, `.json` (THREE.js), `.dae` and so on.

> NOTE: XR Fragments are file-agnostic, which means that the metadata exists in programmatic 3D scene(nodes) too.
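
To illustrate, a minimal sketch of how a viewer could pick up this metadata (assuming THREE.js, whose GLTFLoader copies glTF custom properties/"extras" into `object.userData`; the handler logic here is hypothetical):

```js
// Sketch only: traverse a loaded scene and react to href/src custom properties.
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js'

new GLTFLoader().load('index.gltf', (gltf) => {
  gltf.scene.traverse((node) => {
    const { href, src } = node.userData                // glTF "extras" end up here
    if (src)  console.log(`instance content ${src} onto`, node.name)          // e.g. "#q=cube"
    if (href) console.log(`navigate to ${href} on activation of`, node.name)  // e.g. "b.gltf"
  })
})
```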
@@ -589,4 +574,29 @@ This document has no IANA actions.

# Acknowledgments

* [NLNET](https://nlnet.nl)
* [Future of Text](https://futureoftext.org)
* [visual-meta.info](https://visual-meta.info)
# Appendix: Definitions
|definition | explanation |
|----------------------|-------------------------------------------------------------------------------------------------------------------------------|
|human | a sentient being who thinks fuzzy, absorbs, and shares thought (by plain text, not markuplanguage) |
|scene | a (local/remote) 3D scene or 3D file (index.gltf e.g.) |
|3D object | an object inside a scene characterized by vertex-, face- and customproperty data. |
|metadata | custom properties of text, 3D Scene or Object(nodes), relevant to machines and a human minority (academics/developers) |
|XR fragment | URI Fragment with spatial hints like `#pos=0,0,0&t=1,100` e.g. |
|src | (HTML-piggybacked) metadata of a 3D object which instances content |
|href | (HTML-piggybacked) metadata of a 3D object which links to content |
|query                 | a URI Fragment-operator which queries object(s) from a scene like `#q=cube`                                                    |
|visual-meta           | [visual-meta](https://visual-meta.info) data appended to text/books/papers which is indirectly visible/editable in XR.         |
|requestless metadata | metadata which never spawns new requests (unlike RDF/HTML, which can cause framerate-dropping, hence not used a lot in games) |
|FPS | frames per second in spatial experiences (games,VR,AR e.g.), should be as high as possible |
|introspective | inward sensemaking ("I feel this belongs to that") |
|extrospective         | outward sensemaking ("I'm fairly sure John is a person who lives in Oklahoma")                                                 |
|`◻`                   | ASCII representation of a 3D object/mesh                                                                                       |
|(un)obtrusive | obtrusive: wrapping human text/thought in XML/HTML/JSON obfuscates human text into a salad of machine-symbols and words |
|BibTeX | simple tagging/citing/referencing standard for plaintext |
|BibTag | a BibTeX tag |
@@ -69,24 +69,25 @@ Table of Contents
   1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .   2
   2.  Core principle  . . . . . . . . . . . . . . . . . . . . . . .   3
   3.  Conventions and Definitions . . . . . . . . . . . . . . . . .   3
   4.  List of URI Fragments . . . . . . . . . . . . . . . . . . . .   3
   5.  List of metadata for 3D nodes . . . . . . . . . . . . . . . .   4
   6.  Navigating 3D . . . . . . . . . . . . . . . . . . . . . . . .   4
   7.  Embedding 3D content  . . . . . . . . . . . . . . . . . . . .   5
   8.  XR Fragment queries . . . . . . . . . . . . . . . . . . . . .   5
     8.1.  including/excluding . . . . . . . . . . . . . . . . . . .   6
     8.2.  Query Parser  . . . . . . . . . . . . . . . . . . . . . .   7
     8.3.  XR Fragment URI Grammar . . . . . . . . . . . . . . . . .   7
   9.  Text in XR (tagging,linking to spatial objects)  . . . . . .    8
     9.1.  Default Data URI mimetype . . . . . . . . . . . . . . . .  11
     9.2.  URL and Data URI  . . . . . . . . . . . . . . . . . . . .  12
     9.3.  Bibs & BibTeX: lowest common denominator for linking
           data  . . . . . . . . . . . . . . . . . . . . . . . . . .  13
     9.4.  XR Text example parser  . . . . . . . . . . . . . . . . .  15
   10. HYPER copy/paste  . . . . . . . . . . . . . . . . . . . . . .  17
   11. Security Considerations . . . . . . . . . . . . . . . . . . .  17
   12. IANA Considerations . . . . . . . . . . . . . . . . . . . . .  18
   13. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . .  18
   14. Appendix: Definitions . . . . . . . . . . . . . . . . . . . .  18
1.  Introduction
@@ -94,21 +95,20 @@ Table of Contents
   introducing new dataformats?
   Historically, there have been many attempts to create the ultimate
   markuplanguage or 3D fileformat.
   Their lowest common denominator is: (co)authoring using plain text.
   XR Fragments allows us to enrich/connect existing dataformats, by
   recursive use of existing technologies:

   1.  addressability and navigation of 3D scenes/objects: URI Fragments
       (https://en.wikipedia.org/wiki/URI_fragment) + src/href spatial
       metadata

   2.  hasslefree tagging across text and spatial objects using bibs
       (https://github.com/coderofsalvation/tagbibs) / BibTags
       (https://en.wikipedia.org/wiki/BibTeX) as appendix (see visual-
       meta (https://visual-meta.info) e.g.)
@@ -128,82 +128,20 @@ Internet-Draft XR Fragments September 2023
   | "When a car breaks down, the ones *without* turbosupercharger are
   | easier to fix"

   Let's always focus on average humans: our fuzzy symbolical mind must
   be served first, before serving a greater categorized typesafe RDF
   hive mind (https://en.wikipedia.org/wiki/Borg).

   | Humans first, machines (AI) later.
3.  Conventions and Definitions
   See appendix below in case certain terms are not clear.
4.  List of URI Fragments
@@ -218,21 +156,21 @@ Internet-Draft XR Fragments September 2023
   +----------+---------+--------------+----------------------------+
   | #t       | vector2 | #t=500,1000  | sets animation-loop range  |
   |          |         |              | between frame 500 and 1000 |
   +----------+---------+--------------+----------------------------+
   | #......  | string  | #.cubes      | object(s) of interest      |
   |          |         | #cube        | (fragment to object name   |
   |          |         |              | or class mapping)          |
   +----------+---------+--------------+----------------------------+

                                Table 1
   | xyz coordinates are similar to ones found in SVG Media Fragments
@@ -254,10 +192,10 @@ Internet-Draft XR Fragments September 2023
   |       |        | "#q=cube"      | property in 3D fileformats |
   +-------+--------+----------------+----------------------------+

                                Table 2

   Popular compatible 3D fileformats: .gltf, .obj, .fbx, .usdz, .json
   (THREE.js), .dae and so on.
   | NOTE: XR Fragments are file-agnostic, which means that the
   | metadata exists in programmatic 3D scene(nodes) too.
@@ -267,21 +205,6 @@ Internet-Draft XR Fragments September 2023
   Here's an ascii representation of a 3D scene-graph which contains 3D
   objects ◻ and their metadata:

   +--------------------------------------------------------+
   |                                                        |
   |  index.gltf                                            |
@@ -294,6 +217,15 @@ Internet-Draft XR Fragments September 2023
   |                                                        |
   +--------------------------------------------------------+

   An XR Fragment-compatible browser viewing this scene allows the end-
   user to interact with the buttonA and buttonB.
   In case of buttonA the end-user will be teleported to another
@@ -324,20 +256,6 @@ Internet-Draft XR Fragments September 2023
   |                                                        |
   +--------------------------------------------------------+

   An XR Fragment-compatible browser viewing this scene lazy-loads and
   projects painting.png onto the (plane) object called canvas (which is
   copy-instanced in the bed and livingroom).
@@ -357,6 +275,13 @@ Internet-Draft XR Fragments September 2023
   *  #q=cube&rot=0,90,0
   *  #q=price:>2 price:<5

   It's a simple but powerful syntax which allows *css*-like class/
   id-selectors with a searchengine prompt-style feeling:
@@ -382,18 +307,6 @@ Internet-Draft XR Fragments September 2023
   *  see an example video here
      (https://coderofsalvation.github.io/xrfragment.media/queries.mp4)

   8.1.  including/excluding

   +==========+=================================================+
@@ -416,7 +329,14 @@ Internet-Draft XR Fragments September 2023
   |          | objects in nested scenes (instanced by src) (*) |
   +----------+-------------------------------------------------+

                                Table 3

   | *  = #q=-/cube hides object cube only in the root-scene (not nested
   | cube objects)
@@ -441,15 +361,6 @@ Internet-Draft XR Fragments September 2023
   3.  detect excluders like -foo,-foo:1,-.foo,-/foo (reference regex:
       /^-/ )

   4.  detect root selectors like /foo (reference regex: /^[-]?\// )

   5.  detect class selectors like .foo (reference regex: /^[-]?class$/
       )

   6.  detect number values like foo:1 (reference regex: /^[0-9\.]+$/ )
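
   To make these detection steps concrete, a compact JavaScript sketch
   (using the reference regexes above; function and variable names are
   hypothetical):

   // Sketch only: classify tokens of a query such as '-/cube price:>2'
   function classify(token) {
     const val = token.split(':')[1]
     return {
       token,
       exclude: /^-/.test(token),             // step 3: excluders like -foo, -.foo, -/foo
       root:    /^[-]?\//.test(token),        // step 4: root selectors like /foo
       number:  /^[0-9\.]+$/.test(val ?? ''), // step 6: number values like foo:1
     }
   }

   console.log('-/cube price:>2'.split(' ').map(classify))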
@@ -476,6 +387,13 @@ Internet-Draft XR Fragments September 2023
   gen-delims = "#" / "&"
   sub-delims = "," / "="

   | Example: ://foo.com/my3d.gltf#pos=1,0,0&prio=-5&t=0,100

   +=============================+=================================+
@@ -486,7 +404,7 @@ Internet-Draft XR Fragments September 2023
   | pos=1,2,3&rot=0,90,0&q=.foo | combinators                     |
   +-----------------------------+---------------------------------+

                                Table 4

   9.  Text in XR (tagging,linking to spatial objects)
@@ -498,14 +416,6 @@ Internet-Draft XR Fragments September 2023
   Ideally metadata must come *later with* text, but not *obfuscate* the
   text, or *in another* file.

   | Humans first, machines (AI) later (core principle (#core-
   | principle))
@@ -531,6 +441,15 @@ Internet-Draft XR Fragments September 2023
   funneling human thought into typesafe, precise, pre-categorized
   metadata like RDF (see the core principle (#core-principle))

   This allows recursive connections between text itself, as well as 3D
   objects and vice versa, using *BibTags* :
@@ -600,7 +544,7 @@ Internet-Draft XR Fragments September 2023
   |                                    | node to all nodes)          |
   +------------------------------------+-----------------------------+

                                Table 5

   This empowers the enduser spatial expressiveness (see the core
   principle (#core-principle)): spatial wires can be rendered, words
@@ -862,7 +806,7 @@ Internet-Draft XR Fragments September 2023
   |structures      |                                     |               |
   +----------------+-------------------------------------+---------------+

                                Table 6

   9.4.  XR Text example parser
@@ -1016,24 +960,80 @@ Internet-Draft XR Fragments September 2023
   13.  Acknowledgments

   *  NLNET (https://nlnet.nl)
   *  Future of Text (https://futureoftext.org)
   *  visual-meta.info (https://visual-meta.info)
14. Appendix: Definitions
+===============+==============================================+
| definition | explanation |
+===============+==============================================+
| human | a sentient being who thinks fuzzy, absorbs, |
| | and shares thought (by plain text, not |
| | markuplanguage) |
+---------------+----------------------------------------------+
| scene | a (local/remote) 3D scene or 3D file |
| | (index.gltf e.g.) |
+---------------+----------------------------------------------+
| 3D object | an object inside a scene characterized by |
| | vertex-, face- and customproperty data. |
+---------------+----------------------------------------------+
| metadata | custom properties of text, 3D Scene or |
| | Object(nodes), relevant to machines and a |
| | human minority (academics/developers) |
+---------------+----------------------------------------------+
| XR fragment | URI Fragment with spatial hints like |
| | #pos=0,0,0&t=1,100 e.g. |
+---------------+----------------------------------------------+
| src | (HTML-piggybacked) metadata of a 3D object |
| | which instances content |
+---------------+----------------------------------------------+
| href | (HTML-piggybacked) metadata of a 3D object |
| | which links to content |
+---------------+----------------------------------------------+
   | query         | a URI Fragment-operator which queries        |
   |               | object(s) from a scene like #q=cube          |
   +---------------+----------------------------------------------+
   | visual-meta   | visual-meta (https://visual-meta.info) data  |
   |               | appended to text/books/papers which is       |
   |               | indirectly visible/editable in XR.           |
+---------------+----------------------------------------------+
| requestless | metadata which never spawns new requests |
| metadata | (unlike RDF/HTML, which can cause framerate- |
| | dropping, hence not used a lot in games) |
+---------------+----------------------------------------------+
| FPS | frames per second in spatial experiences |
| | (games,VR,AR e.g.), should be as high as |
| | possible |
+---------------+----------------------------------------------+
| introspective | inward sensemaking ("I feel this belongs to |
| | that") |
+---------------+----------------------------------------------+
   | extrospective | outward sensemaking ("I'm fairly sure John  |
   |               | is a person who lives in Oklahoma")          |
   +---------------+----------------------------------------------+
   | ◻             | ASCII representation of a 3D object/mesh     |
   +---------------+----------------------------------------------+
| (un)obtrusive | obtrusive: wrapping human text/thought in |
| | XML/HTML/JSON obfuscates human text into a |
| | salad of machine-symbols and words |
+---------------+----------------------------------------------+
| BibTeX | simple tagging/citing/referencing standard |
| | for plaintext |
+---------------+----------------------------------------------+
| BibTag | a BibTeX tag |
+---------------+----------------------------------------------+
Table 7
@@ -28,14 +28,14 @@ XR Fragments allows us to enrich existing dataformats, by recursive use of exist
Historically, there have been many attempts to create the ultimate markuplanguage or 3D fileformat.<br />
Their lowest common denominator is: (co)authoring using plain text.<br />
XR Fragments allows us to enrich/connect existing dataformats, by recursive use of existing technologies:<br />
</t>
<ol spacing="compact">
<li>addressability and navigation of 3D scenes/objects: <eref target="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</eref> + src/href spatial metadata</li>
<li>hasslefree tagging across text and spatial objects using <eref target="https://github.com/coderofsalvation/tagbibs">bibs</eref> / <eref target="https://en.wikipedia.org/wiki/BibTeX">BibTags</eref> as appendix (see <eref target="https://visual-meta.info">visual-meta</eref> e.g.)</li>
</ol>
<blockquote><t>NOTE: The chapters in this document are ordered from highlevel to lowlevel (technical) as much as possible</t>
</blockquote></section>
@@ -46,106 +46,16 @@ XR Fragments allows us to enrich/connect existing dataformats, by recursive use
This also means that the repair-ability of machine-matters should be human friendly too (not too complex).<br />
</t>
<blockquote><t>&quot;When a car breaks down, the ones <strong>without</strong> turbosupercharger are easier to fix&quot;</t>
</blockquote><t>Let's always focus on average humans: our fuzzy symbolical mind must be served first, before serving a greater <eref target="https://en.wikipedia.org/wiki/Borg">categorized typesafe RDF hive mind</eref>.</t>
<blockquote><t>Humans first, machines (AI) later.</t>
</blockquote><t>Therefore, XR Fragments does not look at XR (or the web) thru the lens of HTML.<br />
XR Fragments itself is HTML-agnostic, though pseudo-XR Fragment browsers <strong>can</strong> be implemented on top of HTML/Javascript.</t>
</section>
<section anchor="conventions-and-definitions"><name>Conventions and Definitions</name>
<t>See appendix below in case certain terms are not clear.</t>
</section>
<section anchor="list-of-uri-fragments"><name>List of URI Fragments</name> <section anchor="list-of-uri-fragments"><name>List of URI Fragments</name>
<table> <table>
@@ -230,7 +140,7 @@ This also means that the repair-ability of machine-matters should be human frien
<td>available through custom property in 3D fileformats</td>
</tr>
</tbody>
</table><t>Popular compatible 3D fileformats: <tt>.gltf</tt>, <tt>.obj</tt>, <tt>.fbx</tt>, <tt>.usdz</tt>, <tt>.json</tt> (THREE.js), <tt>.dae</tt> and so on.</t>
<blockquote><t>NOTE: XR Fragments are file-agnostic, which means that the metadata exists in programmatic 3D scene(nodes) too.</t>
</blockquote></section>
@@ -836,9 +746,111 @@ Therefore, the enduser in an XR Fragment-compatible browser can copy/paste/share
</section>
<section anchor="acknowledgments"><name>Acknowledgments</name>
<ul spacing="compact">
<li><eref target="https://nlnet.nl">NLNET</eref></li>
<li><eref target="https://futureoftext.org">Future of Text</eref></li>
<li><eref target="https://visual-meta.info">visual-meta.info</eref></li>
</ul>
</section>
<section anchor="appendix-definitions"><name>Appendix: Definitions</name>
<table>
<thead>
<tr>
<th>definition</th>
<th>explanation</th>
</tr>
</thead>
<tbody>
<tr>
<td>human</td>
<td>a sentient being who thinks fuzzy, absorbs, and shares thought (by plain text, not markuplanguage)</td>
</tr>
<tr>
<td>scene</td>
<td>a (local/remote) 3D scene or 3D file (index.gltf e.g.)</td>
</tr>
<tr>
<td>3D object</td>
<td>an object inside a scene characterized by vertex-, face- and customproperty data.</td>
</tr>
<tr>
<td>metadata</td>
<td>custom properties of text, 3D Scene or Object(nodes), relevant to machines and a human minority (academics/developers)</td>
</tr>
<tr>
<td>XR fragment</td>
<td>URI Fragment with spatial hints like <tt>#pos=0,0,0&amp;t=1,100</tt> e.g.</td>
</tr>
<tr>
<td>src</td>
<td>(HTML-piggybacked) metadata of a 3D object which instances content</td>
</tr>
<tr>
<td>href</td>
<td>(HTML-piggybacked) metadata of a 3D object which links to content</td>
</tr>
<tr>
<td>query</td>
<td>a URI Fragment-operator which queries object(s) from a scene like <tt>#q=cube</tt></td>
</tr>
<tr>
<td>visual-meta</td>
<td><eref target="https://visual-meta.info">visual-meta</eref> data appended to text/books/papers which is indirectly visible/editable in XR.</td>
</tr>
<tr>
<td>requestless metadata</td>
<td>metadata which never spawns new requests (unlike RDF/HTML, which can cause framerate-dropping, hence not used a lot in games)</td>
</tr>
<tr>
<td>FPS</td>
<td>frames per second in spatial experiences (games,VR,AR e.g.), should be as high as possible</td>
</tr>
<tr>
<td>introspective</td>
<td>inward sensemaking (&quot;I feel this belongs to that&quot;)</td>
</tr>
<tr>
<td>extrospective</td>
<td>outward sensemaking (&quot;I'm fairly sure John is a person who lives in Oklahoma&quot;)</td>
</tr>
<tr>
<td><tt>◻</tt></td>
<td>ASCII representation of a 3D object/mesh</td>
</tr>
<tr>
<td>(un)obtrusive</td>
<td>obtrusive: wrapping human text/thought in XML/HTML/JSON obfuscates human text into a salad of machine-symbols and words</td>
</tr>
<tr>
<td>BibTeX</td>
<td>simple tagging/citing/referencing standard for plaintext</td>
</tr>
<tr>
<td>BibTag</td>
<td>a BibTeX tag</td>
</tr>
</tbody>
</table></section>
</middle>
</rfc>
doc/RFC_XR_Macros.md (new file, 194 lines)
@@ -0,0 +1,194 @@
%%%
Title = "XR Fragments"
area = "Internet"
workgroup = "Internet Engineering Task Force"
[seriesInfo]
name = "XR-Fragments"
value = "draft-XRFRAGMENTS-leonvankammen-00"
stream = "IETF"
status = "informational"
date = 2023-04-12T00:00:00Z
[[author]]
initials="L.R."
surname="van Kammen"
fullname="L.R. van Kammen"
%%%
<!-- for annotated version see: https://raw.githubusercontent.com/ietf-tools/rfcxml-templates-and-schemas/main/draft-rfcxml-general-template-annotated-00.xml -->
<!--{
<style type="text/css">
body{
font-family: monospace;
max-width: 1000px;
font-size: 15px;
padding: 0% 20%;
line-height: 30px;
color:#555;
background:#F0F0F3
}
h1 { margin-top:40px; }
pre{ line-height:18px; }
a,a:visited,a:active{ color: #70f; }
code{
border: 1px solid #AAA;
border-radius: 3px;
padding: 0px 5px 2px 5px;
}
pre{
line-height: 18px;
overflow: auto;
padding: 12px;
}
pre + code {
background:#DDD;
}
pre>code{
border:none;
border-radius:0px;
padding:0;
}
blockquote{
padding-left: 30px;
margin: 0;
border-left: 5px solid #CCC;
}
th {
border-bottom: 1px solid #000;
text-align: left;
padding-right:45px;
padding-left:7px;
background: #DDD;
}
td {
border-bottom: 1px solid #CCC;
font-size:13px;
}
</style>
<br>
<h1>XR Macros</h1>
<br>
<pre>
stream: IETF
area: Internet
status: informational
author: Leon van Kammen
date: 2023-04-12T00:00:00Z
workgroup: Internet Engineering Task Force
value: draft-XRFRAGMENTS-leonvankammen-00
</pre>
}-->
.# Abstract
This draft offers a specification for 4D URLs & navigation, to link 3D scenes and text together with- or without a network-connection.<br>
The specification promotes spatial addressability, sharing, navigation, querying and tagging of interactive (text)objects for (XR) Browsers.<br>
XR Fragments allows us to enrich existing dataformats, by recursive use of existing proven technologies like [URI Fragments](https://en.wikipedia.org/wiki/URI_fragment) and BibTags notation.<br>
> Almost every idea in this document is demonstrated at [https://xrfragment.org](https://xrfragment.org)
{mainmatter}
# Introduction
How can we add more features to existing text & 3D scenes, without introducing new dataformats?<br>
Historically, there have been many attempts to create the ultimate markuplanguage or 3D fileformat.<br>
Their lowest common denominator is: (co)authoring using plain text.<br>
Therefore, XR Macros allows us to enrich/connect existing dataformats, by offering a polyglot notation based on existing notations:<br>
1. getting/setting commonly used 3D properties using querystring- or JSON-notation
> NOTE: The chapters in this document are ordered from highlevel to lowlevel (technical) as much as possible
# Core principle
1. XR Macros use querystrings, but are HTML-agnostic (though pseudo-XR Fragment browsers **can** be implemented on top of HTML/Javascript).
1. XR Macros represent setting/getting commonly used properties found in all popular 3D frameworks/(game)editors/internet browsers.
# Conventions and Definitions
See appendix below in case certain terms are not clear.
# List of XR Macros
(XR) Macros can be embedded in 3D assets/scenes.<br>
Macros allow a limited logic-layer, by recursive (economic) use of the querystring syntax (which the XR Fragment parser already uses).<br>
The only addition is the `|` symbol to round-robin variable values.<br>
Macros also act as events, so more serious scripting languages can react to them as well (see the evaluation sketch after the tables below).<br>
| custom property | value | assign (rr) variable ? | execute opcode? | show contextmenu? |
|-----------------|--------------------------|------------------------|-----------------|-------------------------------------------|
| &#33;clickme | day&#124;noon&#124;night | yes | not yet | only when multiple props start with &#33; |
| day | bg=1,1,1 | no | yes | no |
| noon | bg=0.5,0.5,0.5 | yes | yes | no |
| night | bg=0,0,0&foo=2 | yes | yes | no |
---
| custom property | value | assign (rr) variable ? | execute opcode? | show contextmenu? |
|--------------------|--------------------------|------------------------|-----------------|-----------------------------|
| &#33;turnofflights | night | no | yes | yes because of &#33;clickme |
| &#33;clickme | day&#124;noon&#124;night | yes | not yet | yes because of &#33;clickme |
| day | bg=1,1,1 | no | yes | no |
| noon | bg=0.5,0.5,0.5 | yes | yes | no |
| night | bg=0,0,0&foo=2 | yes | yes | no |
lazy evaluation:
| custom property | value                    | copy verbatim to URL? | (rr) variable [assignment]? |
|-----------------|--------------------------|-----------------------|-----------------------------|
| href | #cyclepreset | yes | no |
| cyclepreset | day&#124;noon&#124;night | no | (yes) yes |
| day | bg=1,1,1 | no | yes [yes] |
| noon | bg=0.5,0.5,0.5 | no | yes [yes] |
| night | bg=0,0,0&foo=2 | no | yes [yes] |
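
To make the round-robin and lazy evaluation above concrete, a minimal JavaScript sketch (names and structure hypothetical, not a reference implementation):

```js
// Custom properties of a 3D object, as in the first table above:
const props = {
  '!clickme': 'day|noon|night',
  day:   'bg=1,1,1',
  noon:  'bg=0.5,0.5,0.5',
  night: 'bg=0,0,0&foo=2',
}
const counters = {} // round-robin position per macro name

function triggerMacro(name, apply) {
  const value = props[name]
  if (value === undefined) return
  // '|'-separated values are cycled round-robin on every trigger
  const options = value.split('|')
  const i = (counters[name] = (counters[name] ?? -1) + 1) % options.length
  const picked = options[i]
  if (picked in props) return triggerMacro(picked, apply)        // lazy: value names another macro
  for (const [k, v] of new URLSearchParams(picked)) apply(k, v)  // else: querystring of opcodes
}

// Usage: each activation cycles day -> noon -> night
triggerMacro('!clickme', (k, v) => console.log('set', k, '=', v))
```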
# Security Considerations
# IANA Considerations
This document has no IANA actions.
# Acknowledgments
* [NLNET](https://nlnet.nl)
* [Future of Text](https://futureoftext.org)
* [visual-meta.info](https://visual-meta.info)
# Appendix: Definitions
|definition | explanation |
|----------------------|-------------------------------------------------------------------------------------------------------------------------------|
|human | a sentient being who thinks fuzzy, absorbs, and shares thought (by plain text, not markuplanguage) |
|scene | a (local/remote) 3D scene or 3D file (index.gltf e.g.) |
|3D object | an object inside a scene characterized by vertex-, face- and customproperty data. |
|metadata | custom properties of text, 3D Scene or Object(nodes), relevant to machines and a human minority (academics/developers) |
|XR fragment | URI Fragment with spatial hints like `#pos=0,0,0&t=1,100` e.g. |
|src | (HTML-piggybacked) metadata of a 3D object which instances content |
|href | (HTML-piggybacked) metadata of a 3D object which links to content |
|query                 | a URI Fragment-operator which queries object(s) from a scene like `#q=cube`                                                    |
|visual-meta           | [visual-meta](https://visual-meta.info) data appended to text/books/papers which is indirectly visible/editable in XR.         |
|requestless metadata | metadata which never spawns new requests (unlike RDF/HTML, which can cause framerate-dropping, hence not used a lot in games) |
|FPS | frames per second in spatial experiences (games,VR,AR e.g.), should be as high as possible |
|introspective | inward sensemaking ("I feel this belongs to that") |
|extrospective         | outward sensemaking ("I'm fairly sure John is a person who lives in Oklahoma")                                                 |
|`◻`                   | ASCII representation of a 3D object/mesh                                                                                       |
|(un)obtrusive | obtrusive: wrapping human text/thought in XML/HTML/JSON obfuscates human text into a salad of machine-symbols and words |
|BibTeX | simple tagging/citing/referencing standard for plaintext |
|BibTag | a BibTeX tag |