<?xml version="1.0" encoding="utf-8"?>
<!-- name="GENERATOR" content="github.com/mmarkdown/mmark Mmark Markdown Processor - mmark.miek.nl" -->

<rfc version="3" ipr="trust200902" docName="draft-XRFRAGMENTS-leonvankammen-00" submissionType="IETF" category="info" xml:lang="en" xmlns:xi="http://www.w3.org/2001/XInclude" indexInclude="true" consensus="true">

<front>

<title>XR Fragments</title><seriesInfo value="draft-XRFRAGMENTS-leonvankammen-00" stream="IETF" status="informational" name="XR-Fragments"></seriesInfo>

<author initials="L.R." surname="van Kammen" fullname="L.R. van Kammen"><organization></organization><address><postal><street></street>
</postal></address></author><date/>

<area>Internet</area>

<workgroup>Internet Engineering Task Force</workgroup>

<abstract>

<t>This draft offers a specification for 4D URLs &amp; navigation, to link 3D scenes and text together with or without a network-connection.<br />

The specification promotes spatial addressability, sharing, navigation, querying and tagging of interactive (text)objects across (XR) Browsers.<br />

XR Fragments allows us to enrich existing dataformats, by recursive use of existing proven technologies like <eref target="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</eref> and BibTeX notation.<br />
</t>

<t>Almost every idea in this document is demonstrated at <eref target="https://xrfragment.org">https://xrfragment.org</eref></t>

</abstract>

</front>

<middle>

<section anchor="introduction"><name>Introduction</name>

<t>How can we add more features to existing text &amp; 3D scenes, without introducing new dataformats?<br />

Historically, there have been many attempts to create the ultimate markuplanguage or 3D fileformat.<br />

However, through the lens of authoring, their lowest common denominator is still: plain text.<br />

XR Fragments allows us to enrich/connect existing dataformats, by recursive use of existing technologies:<br />
</t>

<ol spacing="compact">
<li>addressability and navigation of 3D scenes/objects: <eref target="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</eref> + src/href spatial metadata</li>
<li>hasslefree tagging across text and spatial objects using <eref target="https://en.wikipedia.org/wiki/BibTeX">BibTeX</eref> 'tags' as appendix (see <eref target="https://visual-meta.info">visual-meta</eref> e.g.)</li>
</ol>

<blockquote><t>NOTE: The chapters in this document are ordered from high-level to low-level (technical) as much as possible</t>
</blockquote></section>

<section anchor="core-principle"><name>Core principle</name>

<t>XR Fragments strives to serve (nontechnical/fuzzy) humans first, and machine(implementations) later, by ensuring hasslefree text-vs-thought feedback loops.<br />

This also means that the repair-ability of machine-matters should be human-friendly too (not too complex).<br />
</t>

<blockquote><t>"When a car breaks down, the ones <strong>without</strong> turbosupercharger are easier to fix"</t>
</blockquote><t>Let's always focus on average humans: the 'fuzzy symbolical mind' must be served first, before serving the greater <eref target="https://en.wikipedia.org/wiki/Borg">'categorized typesafe RDF hive mind'</eref>.</t>

<blockquote><t>Humans first, machines (AI) later.</t>
</blockquote></section>

<section anchor="conventions-and-definitions"><name>Conventions and Definitions</name>

<table>
<thead>
<tr>
<th>definition</th>
<th>explanation</th>
</tr>
</thead>
<tbody>
<tr>
<td>human</td>
<td>a sentient being who thinks fuzzy, absorbs, and shares thought (by plain text, not markuplanguage)</td>
</tr>
<tr>
<td>scene</td>
<td>a (local/remote) 3D scene or 3D file (index.gltf e.g.)</td>
</tr>
<tr>
<td>3D object</td>
<td>an object inside a scene characterized by vertex-, face- and custom property data.</td>
</tr>
<tr>
<td>metadata</td>
<td>custom properties of text, 3D Scene or Object(nodes), relevant to machines and a human minority (academics/developers)</td>
</tr>
<tr>
<td>XR fragment</td>
<td>URI Fragment with spatial hints like <tt>#pos=0,0,0&amp;t=1,100</tt> e.g.</td>
</tr>
<tr>
<td>src</td>
<td>(HTML-piggybacked) metadata of a 3D object which instances content</td>
</tr>
<tr>
<td>href</td>
<td>(HTML-piggybacked) metadata of a 3D object which links to content</td>
</tr>
<tr>
<td>query</td>
<td>a URI Fragment-operator which queries object(s) from a scene, like <tt>#q=cube</tt></td>
</tr>
<tr>
<td>visual-meta</td>
<td><eref target="https://visual-meta.info">visual-meta</eref> data appended to text/books/papers which is indirectly visible/editable in XR.</td>
</tr>
<tr>
<td>requestless metadata</td>
<td>opposite of networked metadata (RDF/HTML requests can easily fan out and drop framerate, hence are not used a lot in games).</td>
</tr>
<tr>
<td>FPS</td>
<td>frames per second in spatial experiences (games, VR, AR e.g.), should be as high as possible</td>
</tr>
<tr>
<td>introspective</td>
<td>inward sensemaking ("I feel this belongs to that")</td>
</tr>
<tr>
<td>extrospective</td>
<td>outward sensemaking ("I'm fairly sure John is a person who lives in Oklahoma")</td>
</tr>
<tr>
<td><tt>◻</tt></td>
<td>ASCII representation of a 3D object/mesh</td>
</tr>
<tr>
<td>(un)obtrusive</td>
<td>obtrusive: wrapping human text/thought in XML/HTML/JSON obfuscates human text into a salad of machine-symbols and words</td>
</tr>
</tbody>
</table></section>

<section anchor="list-of-uri-fragments"><name>List of URI Fragments</name>

<table>
<thead>
<tr>
<th>fragment</th>
<th>type</th>
<th>example</th>
<th>info</th>
</tr>
</thead>
<tbody>
<tr>
<td><tt>#pos</tt></td>
<td>vector3</td>
<td><tt>#pos=0.5,0,0</tt></td>
<td>positions camera at xyz-coord 0.5,0,0</td>
</tr>
<tr>
<td><tt>#rot</tt></td>
<td>vector3</td>
<td><tt>#rot=0,90,0</tt></td>
<td>rotates camera to xyz-rotation 0,90,0</td>
</tr>
<tr>
<td><tt>#t</tt></td>
<td>vector2</td>
<td><tt>#t=500,1000</tt></td>
<td>sets animation-loop range between frame 500 and 1000</td>
</tr>
<tr>
<td><tt>#......</tt></td>
<td>string</td>
<td><tt>#.cubes</tt> <tt>#cube</tt></td>
<td>object(s) of interest (fragment to object name or class mapping)</td>
</tr>
</tbody>
</table><blockquote><t>xyz coordinates are similar to the ones found in SVG Media Fragments</t>
</blockquote></section>

<section anchor="list-of-metadata-for-3d-nodes"><name>List of metadata for 3D nodes</name>

<table>
<thead>
<tr>
<th>key</th>
<th>type</th>
<th>example (JSON)</th>
<th>info</th>
</tr>
</thead>
<tbody>
<tr>
<td><tt>name</tt></td>
<td>string</td>
<td><tt>"name": "cube"</tt></td>
<td>available in all 3D fileformats &amp; scenes</td>
</tr>
<tr>
<td><tt>class</tt></td>
<td>string</td>
<td><tt>"class": "cubes"</tt></td>
<td>available through custom property in 3D fileformats</td>
</tr>
<tr>
<td><tt>href</tt></td>
<td>string</td>
<td><tt>"href": "b.gltf"</tt></td>
<td>available through custom property in 3D fileformats</td>
</tr>
<tr>
<td><tt>src</tt></td>
<td>string</td>
<td><tt>"src": "#q=cube"</tt></td>
<td>available through custom property in 3D fileformats</td>
</tr>
</tbody>
</table><t>Popular compatible 3D fileformats: <tt>.gltf</tt>, <tt>.obj</tt>, <tt>.fbx</tt>, <tt>.usdz</tt>, <tt>.json</tt> (THREE.js), <tt>COLLADA</tt> and so on.</t>
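
<t>For example, in glTF these keys can live in a node's <tt>extras</tt> (a sketch of a hypothetical scene-node; the property values are illustrative, not mandated by this spec):</t>

<artwork>{
  "name"  : "aquariumcube",
  "mesh"  : 0,
  "extras": {
    "class": "cubes",
    "href" : "other.fbx",
    "src"  : "#q=cube"
  }
}
</artwork>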
<blockquote><t>NOTE: XR Fragments are file-agnostic, which means that the metadata can exist in programmatic 3D scene(nodes) too.</t>
</blockquote></section>

<section anchor="navigating-3d"><name>Navigating 3D</name>

<t>Here's an ASCII representation of a 3D scene-graph which contains 3D objects <tt>◻</tt> and their metadata:</t>

<artwork>  +--------------------------------------------------------+
  |                                                        |
  |  index.gltf                                            |
  |    │                                                   |
  |    ├── ◻ buttonA                                       |
  |    │     └ href: #pos=1,0,1&amp;t=100,200                  |
  |    │                                                   |
  |    └── ◻ buttonB                                       |
  |          └ href: other.fbx                             |  &lt;-- file-agnostic (can be .gltf .obj etc)
  |                                                        |
  +--------------------------------------------------------+
</artwork>

<t>An XR Fragment-compatible browser viewing this scene allows the end-user to interact with <tt>buttonA</tt> and <tt>buttonB</tt>.<br />

In case of <tt>buttonA</tt> the end-user will be teleported to another location and time in the <strong>currently loaded scene</strong>, but <tt>buttonB</tt> will
<strong>replace the current scene</strong> with a new one, like <tt>other.fbx</tt>.</t>
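
<t>A minimal javascript sketch of this behaviour (assuming a hypothetical <tt>loadScene()</tt> and a three.js-like <tt>camera</tt>; the names are illustrative, not part of this spec):</t>

<artwork>// naive href-handler: teleport within the current scene, or replace the scene
function followHref( href, camera, scene ){
  if( href[0] == '#' ){   // XR Fragment: stay in the currently loaded scene
    let args = Object.fromEntries( href.substr(1).split('&amp;').map( (kv) => kv.split('=') ) )
    if( args.pos ) camera.position.set( ...args.pos.split(',').map(Number) ) // buttonA: teleport
    if( args.t   ) scene.animationLoop = args.t.split(',').map(Number)       // illustrative: [start,end] frames
  }else loadScene( href ) // buttonB: replace the current scene (other.fbx e.g.)
}
</artwork>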
</section>

<section anchor="embedding-3d-content"><name>Embedding 3D content</name>

<t>Here's an ASCII representation of a 3D scene-graph with 3D objects <tt>◻</tt> which embeds remote &amp; local 3D objects <tt>◻</tt> (with or without queries):</t>

<artwork>  +--------------------------------------------------------+  +-------------------------+
  |                                                        |  |                         |
  |  index.gltf                                            |  | ocean.com/aquarium.fbx  |
  |    │                                                   |  |   │                     |
  |    ├── ◻ canvas                                        |  |   └── ◻ fishbowl        |
  |    │     └ src: painting.png                           |  |         ├─ ◻ bass       |
  |    │                                                   |  |         └─ ◻ tuna       |
  |    ├── ◻ aquariumcube                                  |  |                         |
  |    │     └ src: ://ocean.com/aquarium.fbx#q=bass%20tuna|  +-------------------------+
  |    │                                                   |
  |    ├── ◻ bedroom                                       |
  |    │     └ src: #q=canvas                              |
  |    │                                                   |
  |    └── ◻ livingroom                                    |
  |          └ src: #q=canvas                              |
  |                                                        |
  +--------------------------------------------------------+
</artwork>

<t>An XR Fragment-compatible browser viewing this scene lazy-loads and projects <tt>painting.png</tt> onto the (plane) object called <tt>canvas</tt> (which is copy-instanced in the bedroom and livingroom).<br />

Also, after lazy-loading <tt>ocean.com/aquarium.fbx</tt>, only the queried objects <tt>bass</tt> and <tt>tuna</tt> will be instanced inside <tt>aquariumcube</tt>.<br />

Resizing will happen according to its placeholder object <tt>aquariumcube</tt>, see chapter Scaling.<br />
</t>
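
<t>A sketch of how a browser could resolve such <tt>src</tt> values (assuming hypothetical helpers <tt>loadAsset()</tt> and <tt>queryScene()</tt>, see chapter XR Fragment queries):</t>

<artwork>// illustrative: instance (queried) remote/local content into a placeholder object
async function resolveSrc( src, placeholder, scene ){
  let [url, frag] = src.split('#')
  let source      = url ? await loadAsset( url ) : scene   // lazy-load remote scene, or use current one
  let objects     = source.children
  if( frag &amp;&amp; frag.match(/^q=/) )
    objects = queryScene( source, decodeURIComponent( frag.substr(2) ) ) // e.g. 'bass tuna'
  objects.map( (o) => placeholder.add( o.clone() ) )       // copy-instance into the placeholder
  // resizing happens according to the placeholder (see chapter Scaling)
}
</artwork>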
</section>

<section anchor="text-in-xr-tagging-linking-to-spatial-objects"><name>Text in XR (tagging, linking to spatial objects)</name>

<t>We still think and speak in simple text, not in HTML or RDF.<br />

The most advanced human will probably not shout <tt>&lt;h1&gt;FIRE!&lt;/h1&gt;</tt> in case of emergency.<br />

Given the new dawn of (non-keyboard) XR interfaces, keeping text as-is (not obscuring it with markup) is preferred.<br />

Ideally metadata must come <strong>with</strong> the text, but not <strong>obfuscate</strong> the text, or live <strong>in another</strong> file.<br />
</t>

<blockquote><t>Humans first, machines (AI) later (<eref target="#core-principle">core principle</eref>)</t>
</blockquote><t>This way:</t>

<ol spacing="compact">
<li>XR Fragments allows <b id="tagging-text">hasslefree XR text tagging</b>, using BibTeX metadata <strong>at the end of content</strong> (like <eref target="https://visual-meta.info">visual-meta</eref>).</li>
<li>XR Fragments allows hasslefree <eref target="#textual-tagging">textual tagging</eref>, <eref target="#spatial-tagging">spatial tagging</eref>, and <eref target="#supra-tagging">supra tagging</eref>, by mapping 3D/text object (class)names using BibTeX 'tags'</li>
<li>inline BibTeX 'tags' are the minimum required <strong>requestless metadata</strong>-layer for XR text; RDF/JSON is great, but fits better in the application-layer</li>
<li>Default font (unless specified otherwise) is a modern monospace font, for maximized tabular expressiveness (see <eref target="#core-principle">the core principle</eref>).</li>
<li>anti-pattern: hardcoupling a mandatory <strong>obtrusive markuplanguage</strong> or framework with an XR browser (HTML/VRML/Javascript) (see <eref target="#core-principle">the core principle</eref>)</li>
<li>anti-pattern: limiting human introspection, by immediately funneling human thought into typesafe, precise, pre-categorized metadata like RDF (see <eref target="#core-principle">the core principle</eref>)</li>
</ol>

<t>This allows recursive connections between text itself, as well as 3D objects, and vice versa, using <strong>BibTeX-tags</strong>:</t>

<artwork>  +--------------------------------------------------+
  |  My Notes                                        |
  |                                                  |
  |  The houses seen here are built in baroque style.|
  |                                                  |
  |  @house{houses,    &lt;----- XR Fragment triple/tag: phrase-matching BibTeX
  |    url = {#.house} &lt;----- XR Fragment URI
  |  }                                               |
  +--------------------------------------------------+
</artwork>

<t>This allows instant realtime tagging of objects at various scopes:</t>

<table>
<thead>
<tr>
<th>scope</th>
<th>matching algo</th>
</tr>
</thead>
<tbody>
<tr>
<td><b id="textual-tagging">textual</b></td>
<td>text containing 'houses' is now automatically tagged with 'house' (incl. plaintext <tt>src</tt> child nodes)</td>
</tr>
<tr>
<td><b id="spatial-tagging">spatial</b></td>
<td>spatial object(s) with <tt>"class":"house"</tt> (because of <tt>{#.house}</tt>) are now automatically tagged with 'house' (incl. child nodes)</td>
</tr>
<tr>
<td><b id="supra-tagging">supra</b></td>
<td>text- or spatial-object(s) (non-descendant nodes) elsewhere, named 'house', are automatically tagged with 'house' (current node to root node)</td>
</tr>
<tr>
<td><b id="omni-tagging">omni</b></td>
<td>text- or spatial-object(s) (non-descendant nodes) elsewhere, containing class/name 'house', are automatically tagged with 'house' (root node to all nodes)</td>
</tr>
<tr>
<td><b id="infinite-tagging">infinite</b></td>
<td>text- or spatial-object(s) (non-descendant nodes) elsewhere, containing class/name 'house' or 'houses', are automatically tagged with 'house' (root node to all nodes)</td>
</tr>
</tbody>
</table><t>This empowers the enduser's spatial expressiveness (see <eref target="#core-principle">the core principle</eref>): spatial wires can be rendered, words can be highlighted, spatial objects can be highlighted/moved/scaled, and links can be manipulated by the user.<br />

The simplicity of appending BibTeX 'tags' (humans first, machines later) is also demonstrated by <eref target="https://visual-meta.info">visual-meta</eref> in greater detail.</t>

<ol spacing="compact">
<li>The XR Browser needs to adjust tag-scope based on the enduser's needs/focus (infinite tagging only makes sense when the environment is scaled down significantly)</li>
<li>The XR Browser should always allow the human to view/edit the metadata, by clicking 'toggle metadata' on the 'back' (contextmenu e.g.) of any XR text, anywhere, anytime.</li>
</ol>

<blockquote><t>NOTE: infinite matches both 'house' and 'houses' in text, as well as spatial objects with <tt>"class":"house"</tt> or name "house". This multiplexing of id/category is deliberate, because of <eref target="#core-principle">the core principle</eref>.</t>
</blockquote>
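
<t>A minimal javascript sketch of such matching (assuming a flat list of nodes with <tt>name</tt>, <tt>class</tt> and <tt>text</tt> properties; the scope-rules are simplified to textual/spatial/supra):</t>

<artwork>// illustrative: apply a tag like @house{houses, url = {#.house} } to text &amp; spatial objects
function applyTag( tag, nodes ){
  let name  = tag.k.replace(/[{,].*/,'')                  // 'house'  (tagname)
  let key   = (tag.k.match(/{(.*?),/)||[])[1] || name     // 'houses' (phrase to match in text)
  let klass = (tag.v.url||'').replace(/[{}#.]/g,'')       // 'house'  (from url = {#.house})
  return nodes.filter( (n) =>
    (n.text &amp;&amp; n.text.match(key)) ||                      // textual scope
    n.class == klass || n.name == name                    // spatial / supra scope
  ).map( (n) => ((n.tags = (n.tags||[]).concat(name)), n) )
}
</artwork>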
<section anchor="default-data-uri-mimetype"><name>Default Data URI mimetype</name>

<t>The <tt>src</tt>-values work as expected (respecting mime-types), however:</t>

<t>The XR Fragment specification bumps the traditional default browser-mimetype</t>

<t><tt>text/plain;charset=US-ASCII</tt></t>

<t>to a green eco-friendly:</t>

<t><tt>text/plain;charset=utf-8;bib=^@</tt></t>

<t>This indicates that <eref target="https://github.com/coderofsalvation/tagbibs">bibs</eref> and <eref target="https://en.wikipedia.org/wiki/BibTeX">bibtags</eref> matching regex <tt>^@</tt> will automatically get filtered out, in order to:</t>

<ul spacing="compact">
<li>automatically detect links between textual/spatial objects</li>
<li>detect opinionated bibtag appendices (<eref target="https://visual-meta.info">visual-meta</eref> e.g.)</li>
</ul>
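
<t>A sketch of this <tt>bib=^@</tt> filtering, splitting human text from its trailing bibtags:</t>

<artwork>// illustrative: demultiplex plain text and its bibtag appendix (lines matching ^@)
const demux = (str) => {
  const lines = str.split(/\r?\n/)
  const i     = lines.findIndex( (l) => /^@/.test(l) )
  return i &lt; 0 ? { text: str, bib: '' }
               : { text: lines.slice(0,i).join('\n'), bib: lines.slice(i).join('\n') }
}
</artwork>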
<t>Its concept is similar to literate programming, which empowers local/remote responses to:</t>

<ul spacing="compact">
<li>(de)multiplex human text and metadata in one go (see <eref target="#core-principle">the core principle</eref>)</li>
<li>no network-overhead for metadata (see <eref target="#core-principle">the core principle</eref>)</li>
<li>ensuring high FPS: HTML/RDF historically is too 'requesty'/'parsy' for game studios</li>
<li>rich send/receive/copy-paste everywhere by default, metadata being retained (see <eref target="#core-principle">the core principle</eref>)</li>
<li>net result: fewer webservices, therefore fewer servers, and overall better FPS in XR</li>
</ul>

<blockquote><t>This significantly expands expressiveness and portability of human-tagged text, by <strong>postponing machine-concerns to the end of the human text</strong>, in contrast to literal interweaving of content and markup symbols (or extra network requests, webservices e.g.).</t>
</blockquote><t>For all other purposes, regular mimetypes can be used (but are not required by the spec).<br />

To keep XR Fragments a lightweight spec, BibTeX is used for text/spatial tagging (not a scripting language or RDF e.g.).</t>

<blockquote><t>Applications are also free to attach any JSON(-LD / RDF) to spatial objects using custom properties (but this is not interpreted by this spec).</t>
</blockquote></section>

<section anchor="url-and-data-uri"><name>URL and Data URI</name>

<artwork>  +--------------------------------------------------------------+  +------------------------+
  |                                                              |  | author.com/article.txt |
  |  index.gltf                                                  |  +------------------------+
  |    │                                                         |  |                        |
  |    ├── ◻ article_canvas                                      |  |  Hello friends.        |
  |    │     └ src: ://author.com/article.txt                    |  |                        |
  |    │                                                         |  |  @friend{friends       |
  |    └── ◻ note_canvas                                         |  |    ...                 |
  |          └ src:`data:welcome human @...`                     |  |  }                     |
  |                                                              |  +------------------------+
  |                                                              |
  +--------------------------------------------------------------+
</artwork>

<t>The enduser will only see <tt>welcome human</tt> and <tt>Hello friends</tt> rendered spatially.
The beauty is that text (AND visual-meta) in a Data URI promotes rich copy-paste.
In both cases, the text gets rendered immediately (onto a plane geometry, hence the name '_canvas').
The XR Fragment-compatible browser can let the enduser access visual-meta(data)-fields after interacting with the object (contextmenu e.g.).</t>

<t>The mapping between 3D objects and text (src-data) is simple:</t>

<t>Example:</t>

<artwork>  +------------------------------------------------------------------------------------+
  |                                                                                    |
  |  index.gltf                                                                        |
  |    │                                                                               |
  |    └── ◻ rentalhouse                                                               |
  |          └ class: house                                                            |
  |          └ ◻ note                                                                  |
  |                └ src:`data: todo: call owner                                       |
  |                        @house{owner,                                               |
  |                          url = {#.house}                                           |
  |                        }`                                                          |
  +------------------------------------------------------------------------------------+
</artwork>

<t>3D object names and/or classes map to the <tt>name</tt> of visual-meta glossary-entries.
This allows rich interaction and interlinking between text and 3D objects:</t>
<ol spacing="compact">
<li>When the user surfs to https://.../index.gltf#rentalhouse the XR Fragments-parser points the enduser to the <tt>rentalhouse</tt> object, and can show contextual info about it (as sketched below).</li>
<li>When (partial) remote content is embedded through XR Fragment queries (see XR Fragment queries), indirectly related metadata can be embedded along.</li>
</ol>
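
<t>A sketch of step 1 (assuming a three.js-like <tt>scene.traverse()</tt>, with custom properties available via <tt>userData</tt>; illustrative, not normative):</t>

<artwork>// illustrative: resolve '#rentalhouse' (name) or '#.house' (class) to scene objects
function resolveFragment( frag, scene ){
  const sel  = frag.replace(/^#/,'')
  const hits = []
  scene.traverse( (o) => {
    if( sel[0] == '.' ? o.userData.class == sel.substr(1) : o.name == sel ) hits.push(o)
  })
  return hits
}
</artwork>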
</section>

<section anchor="bibs-enabled-bibtex-lowest-common-denominator-for-tagging-triples"><name>Bibs-enabled BibTeX: lowest common denominator for tagging/triples</name>

<blockquote><t>"When a car breaks down, the ones <strong>without</strong> turbosupercharger are easier to fix"</t>
</blockquote><t>Unlike XML or JSON, the typeless, unnested, everything-is-text nature of BibTeX tags is a great advantage for introspection.<br />

It's a missing sensemaking precursor to (eventual) extrospective RDF.<br />

BibTeX-appendices are already used in the digital AND physical world (academic books, <eref target="https://visual-meta.info">visual-meta</eref>), perhaps due to their terseness &amp; simplicity.<br />

In that sense, it's one step up from the <tt>.ini</tt> fileformat (which has never leaked into the physical world like BibTeX has):</t>

<ol spacing="compact">
<li><b id="frictionless-copy-paste">frictionless copy/pasting</b> (by humans) of (unobtrusive) content AND metadata</li>
<li>an introspective 'sketchpad' for metadata, which can (optionally) mature into RDF later</li>
</ol>

<table>
<thead>
<tr>
<th>characteristic</th>
<th>UTF8 Plain Text (with BibTeX)</th>
<th>RDF</th>
</tr>
</thead>
<tbody>
<tr>
<td>perspective</td>
<td>introspective</td>
<td>extrospective</td>
</tr>
<tr>
<td>structure</td>
<td>fuzzy (sensemaking)</td>
<td>precise</td>
</tr>
<tr>
<td>space/scope</td>
<td>local</td>
<td>world</td>
</tr>
<tr>
<td>everything is text (string)</td>
<td>yes</td>
<td>no</td>
</tr>
<tr>
<td>paperfriendly</td>
<td><eref target="https://github.com/coderofsalvation/tagbibs">bibs</eref></td>
<td>no</td>
</tr>
<tr>
<td>leaves (dictated) text intact</td>
<td>yes</td>
<td>no</td>
</tr>
<tr>
<td>markup language</td>
<td>just an appendix</td>
<td>~4 different</td>
</tr>
<tr>
<td>polyglot format</td>
<td>no</td>
<td>yes</td>
</tr>
<tr>
<td>easy to copy/paste content+metadata</td>
<td>yes</td>
<td>up to application</td>
</tr>
<tr>
<td>easy to write/repair for layman</td>
<td>yes</td>
<td>depends</td>
</tr>
<tr>
<td>easy to (de)serialize</td>
<td>yes (fits on A4 paper)</td>
<td>depends</td>
</tr>
<tr>
<td>infrastructure</td>
<td>selfcontained (plain text)</td>
<td>(semi)networked</td>
</tr>
<tr>
<td>freeform tagging/annotation</td>
<td>yes, terse</td>
<td>yes, verbose</td>
</tr>
<tr>
<td>can be appended to text-content</td>
<td>yes</td>
<td>up to application</td>
</tr>
<tr>
<td>copy-paste text preserves metadata</td>
<td>yes</td>
<td>up to application</td>
</tr>
<tr>
<td>emoji</td>
<td>yes</td>
<td>depends on encoding</td>
</tr>
<tr>
<td>predicates</td>
<td>free</td>
<td>semi pre-determined</td>
</tr>
<tr>
<td>implementation/network overhead</td>
<td>no</td>
<td>depends</td>
</tr>
<tr>
<td>used in (physical) books/PDF</td>
<td>yes (visual-meta)</td>
<td>no</td>
</tr>
<tr>
<td>terse non-verb predicates</td>
<td>yes</td>
<td>no</td>
</tr>
<tr>
<td>nested structures</td>
<td>no (but: BibTeX rulers)</td>
<td>yes</td>
</tr>
</tbody>
</table></section>

<section anchor="xr-text-example-parser"><name>XR Text example parser</name>

<ol spacing="compact">
<li>The XR Fragments spec does not aim to harden the BibTeX format</li>
<li>However, respect multi-line BibTeX values because of <eref target="#core-principle">the core principle</eref></li>
<li>Expand bibs and rulers (like <tt>${visual-meta-start}</tt>) according to the <eref target="https://github.com/coderofsalvation/tagbibs">tagbibs spec</eref></li>
<li>BibTeX snippets should always start at the beginning of a line (regex: <tt>^@</tt>), hence the mimetype <tt>text/plain;charset=utf-8;bib=^@</tt></li>
</ol>

<t>Here's an XR Text (de)multiplexer in javascript, which ticks all the above boxes:</t>

<artwork>xrtext = {

  decode: (str) => {
    // bibtex: ↓@ ↓&lt;tag|tag{phrase,|{ruler}> ↓property ↓end
    let pat = [ /@/, /^\S+[,{}]/, /},/, /}/ ]
    let tags = [], text='', i=0, prop=''
    var bibs = { regex: /(@[a-zA-Z0-9_+]+@[a-zA-Z0-9_@]+)/g, tags: {}}
    let lines = str.replace(/\r?\n/g,'\n').split(/\n/)
    for( let i = 0; i &lt; lines.length &amp;&amp; !lines[i].match( /^@/ ); i++ ) text += lines[i]+'\n'

    bibtex = lines.join('\n').substr( text.length )
    bibtex.replace( bibs.regex , (m,k,v) => {
      tok   = m.substr(1).split("@")
      match = tok.shift()
      tok.map( (t) => bibs.tags[match] = `@${t}{${match},\n}\n` )
    })
    bibtex = Object.values(bibs.tags).join('\n') + bibtex.replace( bibs.regex, '')
    bibtex.split( pat[0] ).map( (t) => {
      try{
        let v = {}
        if( !(t = t.trim()) ) return
        if( tag = t.match( pat[1] ) ) tag = tag[0]
        if( tag.match( /^{.*}$/ ) ) return tags.push({ruler:tag})
        t = t.substr( tag.length )
        t.split( pat[2] )
         .map( kv => {
           if( !(kv = kv.trim()) || kv == "}" ) return
           v[ kv.match(/\s?(\S+)\s?=/)[1] ] = kv.substr( kv.indexOf("{")+1 )
         })
        tags.push( { k:tag, v } )
      }catch(e){ console.error(e) }
    })
    return {text, tags}
  },

  encode: (text,tags) => {
    let str = text+"\n"
    for( let i in tags ){
      let item = tags[i]
      if( item.ruler ){
        str += `@${item.ruler}\n`
        continue;
      }
      str += `@${item.k}\n`
      for( let j in item.v ) str += ` ${j} = {${item.v[j]}}\n`
      str += `}\n`
    }
    return str
  }
}
</artwork>

<t>The above (de)multiplexes text/metadata, expands bibs, and (de)serializes BibTeX (and it all fits more or less on one A4 paper)</t>

<blockquote><t>The above can be used as a starting point for LLMs to translate/steelman it into a more formal form/language.</t>
</blockquote>

<artwork>str = `
hello world

@hello@greeting
@{some-section}
@flap{
  asdf = {23423}
}`

var {tags,text} = xrtext.decode(str)          // demultiplex text &amp; bibtex
tags.find( (t) => t.k == 'flap{' ).v.asdf = 1 // edit tag
tags.push({ k:'bar{', v:{abc:123} })          // add tag
console.log( xrtext.encode(text,tags) )       // multiplex text &amp; bibtex back together
</artwork>

<artwork>@{references-start}
@misc{emilyHegland/Edgar&amp;Frod,
  author = {Emily Hegland},
  title = {Edgar &amp; Frode Hegland, November 2021},
  year = {2021},
  month = {11},
}
</artwork>

<t>The above BibTeX-flavor can be imported, however it will be rewritten to Dumb BibTeX, to satisfy rule 2 &amp; 5, as well as the <eref target="#core-principle">core principle</eref>:</t>

<artwork>@visual-meta{
  version = {1.1},
  generator = {Author 7.6.2 (1064)},
  section = {visual-meta-header}
}
@misc{emilyHegland/Edgar&amp;Frod,
  author = {Emily Hegland},
  title = {Edgar &amp; Frode Hegland, November 2021},
  year = {2021},
  month = {11},
  section = {references}
}
</artwork>

</section>
</section>

<section anchor="hyper-copy-paste"><name>HYPER copy/paste</name>

<t>The previous example offers something exciting compared to simple copy/paste of 3D objects or text.
XR Fragments allows HYPER-copy/paste: time, space and text interlinked.
Therefore, the enduser in an XR Fragment-compatible browser can copy/paste/share data in these ways:</t>

<ol spacing="compact">
<li>time/space: 3D object (current animation-loop)</li>
<li>text: TeXt object (including BibTeX/visual-meta if any)</li>
<li>interlinked: Collected objects by visual-meta tag</li>
</ol>
</section>

<section anchor="xr-fragment-queries"><name>XR Fragment queries</name>

<t>Include, exclude, or hide/show objects using space-separated strings:</t>

<ul spacing="compact">
<li><tt>#q=cube</tt></li>
<li><tt>#q=cube -ball_inside_cube</tt></li>
<li><tt>#q=* -sky</tt></li>
<li><tt>#q=-.language .english</tt></li>
<li><tt>#q=cube&amp;rot=0,90,0</tt></li>
<li><tt>#q=price:>2 price:&lt;5</tt></li>
</ul>

<t>It's a simple but powerful syntax which allows <b>css</b>-like class/id-selectors with a searchengine prompt-style feeling:</t>

<ol spacing="compact">
<li>queries are showing/hiding objects <strong>only</strong> when defined as <tt>src</tt> value (prevents sharing of scene-tampered URLs)</li>
<li>queries are highlighting objects when defined in the top-level (browser) URL (bar)</li>
<li>search words like <tt>cube</tt> and <tt>foo</tt> in <tt>#q=cube foo</tt> are matched against 3D object names or custom metadata-key(values)</li>
<li>search words like <tt>cube</tt> and <tt>foo</tt> in <tt>#q=cube foo</tt> are matched against tags (BibTeX) inside plaintext <tt>src</tt> values like <tt>@cube{redcube, ...</tt> e.g.</li>
<li><tt>#</tt> equals <tt>#q=*</tt></li>
<li>words starting with <tt>.</tt> like <tt>.german</tt> match class-metadata of 3D objects like <tt>"class":"german"</tt></li>
<li>words starting with <tt>.</tt> like <tt>.german</tt> match class-metadata of (BibTeX) tags in XR Text objects like <tt>@german{KarlHeinz, ...</tt> e.g.</li>
</ol>

<blockquote><t><strong>For example</strong>: <tt>#q=.foo</tt> is a shorthand for <tt>#q=class:foo</tt>, which will select objects with custom property <tt>class</tt>:<tt>foo</tt>. Just a simple <tt>#q=cube</tt> will simply select an object named <tt>cube</tt>.</t>
</blockquote>

<ul spacing="compact">
<li>see <eref target="https://coderofsalvation.github.io/xrfragment.media/queries.mp4">an example video here</eref></li>
</ul>

<section anchor="including-excluding"><name>including/excluding</name>

<table>
<thead>
<tr>
<th>operator</th>
<th>info</th>
</tr>
</thead>
<tbody>
<tr>
<td><tt>*</tt></td>
<td>select all objects (only useful in <tt>src</tt> custom property)</td>
</tr>
<tr>
<td><tt>-</tt></td>
<td>removes/hides object(s)</td>
</tr>
<tr>
<td><tt>:</tt></td>
<td>indicates an object-embedded custom property key/value</td>
</tr>
<tr>
<td><tt>.</tt></td>
<td>alias for class: <tt>.foo</tt> equals <tt>class:foo</tt></td>
</tr>
<tr>
<td><tt>></tt> <tt>&lt;</tt></td>
<td>compare float or int number</td>
</tr>
<tr>
<td><tt>/</tt></td>
<td>reference to the root-scene.<br />
Useful in case of (preventing) showing/hiding objects in nested scenes (instanced by <tt>src</tt>) (*)</td>
</tr>
</tbody>
</table><blockquote><t>* = <tt>#q=-/cube</tt> hides object <tt>cube</tt> only in the root-scene (not nested <tt>cube</tt> objects)<br />
<tt>#q=-cube</tt> hides both object <tt>cube</tt> in the root-scene <b>AND</b> nested <tt>cube</tt> objects</t>
</blockquote><t><eref target="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/q.js">» example implementation</eref>
<eref target="https://github.com/coderofsalvation/xrfragment/blob/main/example/assets/query.gltf#L192">» example 3D asset</eref>
<eref target="https://github.com/coderofsalvation/xrfragment/issues/3">» discussion</eref></t>
</section>

<section anchor="query-parser"><name>Query Parser</name>

<t>Here's how to write a query parser:</t>

<ol spacing="compact">
<li>create an associative array/object to store query-arguments as objects</li>
<li>detect object id's &amp; properties <tt>foo:1</tt> and <tt>foo</tt> (reference regex: <tt>/^.*:[&gt;&lt;=!]?/</tt> )</li>
<li>detect excluders like <tt>-foo</tt>, <tt>-foo:1</tt>, <tt>-.foo</tt>, <tt>-/foo</tt> (reference regex: <tt>/^-/</tt> )</li>
<li>detect root selectors like <tt>/foo</tt> (reference regex: <tt>/^[-]?\//</tt> )</li>
<li>detect class selectors like <tt>.foo</tt> (reference regex: <tt>/^[-]?class$/</tt> )</li>
<li>detect number values like <tt>foo:1</tt> (reference regex: <tt>/^[0-9\.]+$/</tt> )</li>
<li>expand aliases like <tt>.foo</tt> into <tt>class:foo</tt></li>
<li>for every query token, split the string on <tt>:</tt></li>
<li>create an empty array <tt>rules</tt></li>
<li>then strip the key-operator: convert "-foo" into "foo"</li>
<li>add the operator and value to the rule-array</li>
<li>then set <tt>id</tt> to <tt>true</tt> or <tt>false</tt> (false = excluder <tt>-</tt>)</li>
<li>and set <tt>root</tt> to <tt>true</tt> or <tt>false</tt> (true = <tt>/</tt> root selector is present)</li>
<li>convert key '/foo' into 'foo'</li>
<li>finally, add the key/value to the store like <tt>store.foo = {id:false,root:true}</tt> e.g.</li>
</ol>
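
<t>A condensed javascript sketch of these steps (comparators like <tt>></tt> <tt>&lt;</tt> are left out for brevity; the reference implementation is linked below):</t>

<artwork>// illustrative query-parser following the steps above
function parseQuery( q ){
  const store = {}
  q.split(/ /).map( (token) => {
    const id   = !token.match(/^-/)              // excluder '-' means id = false
    token      = token.replace(/^-/,'')
    const root = token.match(/^\//) != null      // root selector '/'
    token      = token.replace(/^\//,'')
    if( token.match(/^\./) ) token = 'class:' + token.substr(1)      // expand alias '.foo'
    let [key, val] = token.split(':')
    const rule = { id, root }
    if( val !== undefined )
      rule.value = val.match(/^[0-9\.]+$/) ? parseFloat(val) : val   // number values
    store[ key ] = rule                          // e.g. store.foo = {id:false,root:true}
  })
  return store
}
</artwork>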

<blockquote><t>An example query-parser (which compiles to many languages) can be <eref target="https://github.com/coderofsalvation/xrfragment/blob/main/src/xrfragment/Query.hx">found here</eref></t>
</blockquote></section>

<section anchor="xr-fragment-uri-grammar"><name>XR Fragment URI Grammar</name>

<artwork>reserved    = gen-delims / sub-delims
gen-delims  = "#" / "&amp;"
sub-delims  = "," / "="
</artwork>

<blockquote><t>Example: <tt>://foo.com/my3d.gltf#pos=1,0,0&amp;prio=-5&amp;t=0,100</tt></t>
</blockquote><table>
<thead>
<tr>
<th>Demo</th>
<th>Explanation</th>
</tr>
</thead>
<tbody>
<tr>
<td><tt>pos=1,2,3</tt></td>
<td>vector/coordinate argument e.g.</td>
</tr>
<tr>
<td><tt>pos=1,2,3&amp;rot=0,90,0&amp;q=.foo</tt></td>
<td>combinators</td>
</tr>
</tbody>
</table>
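
<t>A sketch of parsing such a fragment along this grammar (illustrative javascript, not the normative parser):</t>

<artwork>// '#pos=1,0,0&amp;prio=-5&amp;t=0,100' -> { pos:[1,0,0], prio:[-5], t:[0,100] }
const parseFrag = (hash) =>
  Object.fromEntries(
    hash.replace(/^#/,'').split('&amp;').map( (pair) => {
      const [key, val] = pair.split('=')
      return [ key, (val||'').split(',').map( (v) => isNaN(v) ? v : parseFloat(v) ) ]
    })
  )
</artwork>
</section>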

</section>

<section anchor="security-considerations"><name>Security Considerations</name>

<t>Since XR Text contains metadata too, the user should be able to set up tagging-rules, so the copy-paste feature can:</t>

<ul spacing="compact">
<li>filter out sensitive data when copy/pasting (XR text with <tt>class:secret</tt> e.g.)</li>
</ul>
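
<t>For example, such a tagging-rule could be as simple as (a sketch, reusing the <tt>xrtext</tt> example parser from earlier; the <tt>class</tt> property is illustrative):</t>

<artwork>// illustrative: strip tags classed 'secret' before text leaves the browser
function copyFilter( str ){
  const { text, tags } = xrtext.decode( str )
  return xrtext.encode( text, tags.filter( (t) => !( t.v &amp;&amp; t.v.class == 'secret' ) ) )
}
</artwork>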
</section>

<section anchor="iana-considerations"><name>IANA Considerations</name>

<t>This document has no IANA actions.</t>

</section>

<section anchor="acknowledgments"><name>Acknowledgments</name>

<t>TODO acknowledge.</t>

</section>

</middle>

</rfc>