update documentation

This commit is contained in:
Leon van Kammen 2023-09-06 15:13:36 +02:00
parent ba8f3155bb
commit c50c9adbcf
4 changed files with 612 additions and 449 deletions


@ -199,6 +199,11 @@ This also means that the repair-ability of machine-matters should be human frien
<td><code>&#9723;</code></td>
<td>ascii representation of a 3D object/mesh</td>
</tr>
<tr>
<td>(un)obtrusive</td>
<td>obtrusive: wrapping human text/thought in XML/HTML/JSON obfuscates human text into a salad of machine-symbols and words</td>
</tr>
</tbody>
</table>
@ -422,8 +427,8 @@ Ideally metadata must come <strong>later with</strong> text, but not <strong>obf
The simplicity of appending BibTeX &lsquo;tags&rsquo; (humans first, machines later) is also demonstrated by <a href="https://visual-meta.info">visual-meta</a> in greater detail.</p>
<ol>
<li>The XR Browser needs to adjust tag-scope based on the enduser&rsquo;s needs/focus (infinite tagging only makes sense when the environment is scaled down significantly)</li>
<li>The XR Browser should always allow the human to view/edit the metadata, by clicking &lsquo;toggle metadata&rsquo; on the &lsquo;back&rsquo; (contextmenu e.g.) of any XR text, anywhere anytime.</li>
</ol>
<blockquote>
@ -440,31 +445,31 @@ The simplicity of appending BibTeX &lsquo;tags&rsquo; (humans first, machines la
<p>to a green eco-friendly:</p>
<p><code>text/plain;charset=utf-8;bib=^@</code></p>
<p>This indicates that <a href="https://github.com/coderofsalvation/tagbibs">bibs</a> and <a href="https://en.wikipedia.org/wiki/BibTeX">bibtags</a> matching regex <code>^@</code> will automatically get filtered out, in order to:</p>
<ul>
<li>automatically detect links between textual/spatial objects</li>
<li>detect opinionated bibtag appendices (<a href="https://visual-meta.info">visual-meta</a> e.g.)</li>
</ul>
<p>Its concept is similar to literate programming, which empowers local/remote responses to:</p>
<ul>
<li>(de)multiplex human text and metadata in one go (see <a href="#core-principle">the core principle</a>)</li>
<li>avoid separate implementation/network-overhead for metadata (see <a href="#core-principle">the core principle</a>)</li>
<li>ensure high FPS: HTML/RDF historically is too &lsquo;requesty&rsquo;/&lsquo;parsy&rsquo; for game studios</li>
<li>offer rich send/receive/copy-paste everywhere by default, metadata being retained (see <a href="#core-principle">the core principle</a>)</li>
<li>net result: fewer webservices, therefore fewer servers, and overall better FPS in XR</li>
</ul>
<blockquote>
<p>This significantly expands expressiveness and portability of human-tagged text, by <strong>postponing machine-concerns to the end of the human text</strong> in contrast to literal interweaving of content and markup symbols (or extra network requests, webservices e.g.).</p>
</blockquote>
<p>For all other purposes, regular mimetypes can be used (but are not required by the spec).<br>
To keep XR Fragments a lightweight spec, BibTeX is used for text/spatial tagging (not a scripting language or RDF e.g.).</p>
<blockquote>
<p>Applications are also free to attach any JSON(LD / RDF) to spatial objects using custom properties (but these are not interpreted by this spec).</p>
@ -509,25 +514,24 @@ The XR Fragment-compatible browser can let the enduser access visual-meta(data)-
+------------------------------------------------------------------------------------+
</code></pre>
<p>3D object names and/or classes map to <code>name</code> of visual-meta glossary-entries.
This allows rich interaction and interlinking between text and 3D objects:</p>
<ol>
<li>When the user surfs to https://&hellip;/index.gltf#rentalhouse the XR Fragments-parser points the enduser to the rentalhouse object, and can show contextual info about it.</li>
<li>When (partial) remote content is embedded thru XR Fragment queries (see XR Fragment queries), indirectly related metadata can be embedded along.</li>
</ol>
<h2 id="bibs-enabled-bibtex-lowest-common-denominator-for-tagging-triples">Bibs-enabled BibTeX: lowest common denominator for tagging/triples</h2>
<blockquote>
<p>&ldquo;When a car breaks down, the ones <strong>without</strong> turbosupercharger are easier to fix&rdquo;</p>
</blockquote>
<p>Unlike XML or JSON, the typeless, unnested, everything-is-text nature of BibTeX tags is a great advantage for introspection.<br>
It&rsquo;s a missing sensemaking precursor to (eventual) extrospective RDF.<br>
BibTeX-appendices are already used in the digital AND physical world (academic books, <a href="https://visual-meta.info">visual-meta</a>), perhaps due to their terseness &amp; simplicity.<br>
In that sense, it&rsquo;s one step up from the <code>.ini</code> fileformat (which has never leaked into the physical world like BibTeX):</p>
<ol>
<li><b id="frictionless-copy-paste">frictionless copy/pasting</b> (by humans) of (unobtrusive) content AND metadata</li>
@ -568,6 +572,12 @@ In that sense, it&rsquo;s one step up from the <code>.ini</code> fileformat (whi
<td>no</td>
</tr>
<tr>
<td>paperfriendly</td>
<td><a href="https://github.com/coderofsalvation/tagbibs">bibs</a></td>
<td>no</td>
</tr>
<tr>
<td>leaves (dictated) text intact</td>
<td>yes</td>
@ -660,81 +670,121 @@ In that sense, it&rsquo;s one step up from the <code>.ini</code> fileformat (whi
<tr>
<td>nested structures</td>
<td>no (but: BibTeX rulers)</td>
<td>yes</td>
</tr>
</tbody>
</table>
<h2 id="xr-text-example-parser">XR Text example parser</h2>
<ol>
<li>The XR Fragments spec does not aim to harden the BibTeX format</li>
<li>However, respect multi-line BibTeX values because of <a href="#core-principle">the core principle</a></li>
<li>Expand bibs and rulers (like <code>@{visual-meta-start}</code>) according to the <a href="https://github.com/coderofsalvation/tagbibs">tagbibs spec</a></li>
<li>BibTeX snippets should always start at the beginning of a line (regex: ^@), hence mimetype <code>text/plain;charset=utf-8;tag=^@</code></li>
</ol>
<p>Here&rsquo;s an XR Text (de)multiplexer in javascript, which ticks all the above boxes:</p>
<pre><code>xrtext = {
  decode: (str) =&gt; {
    //             bibtex: ↓@  ↓&lt;tag|tag{phrase,|{ruler}&gt;  ↓property  ↓end
    let pat  = [ /@/, /^\S+[,{}]/, /},/, /}/ ]
    let tags = [], text='', i=0, prop=''
    var bibs = { regex: /(@[a-zA-Z0-9_+]+@[a-zA-Z0-9_@]+)/g, tags: {}}
    let lines = str.replace(/\r?\n/g,'\n').split(/\n/)
    for( let i = 0; !lines[i].match( /^@/ ); i++ ) text += lines[i]+'\n'

    bibtex = lines.join('\n').substr( text.length )
    bibtex.replace( bibs.regex , (m,k,v) =&gt; {
       tok   = m.substr(1).split(&quot;@&quot;)
       match = tok.shift()
       tok.map( (t) =&gt; bibs.tags[match] = `@${t}{${match},\n}\n` )
    })
    bibtex = Object.values(bibs.tags).join('\n') + bibtex.replace( bibs.regex, '')
    bibtex.split( pat[0] ).map( (t) =&gt; {
      try{
        let v = {}
        if( !(t = t.trim()) ) return
        if( tag = t.match( pat[1] ) ) tag = tag[0]
        if( tag.match( /^{.*}$/ ) ) return tags.push({ruler:tag})
        t = t.substr( tag.length )
        t.split( pat[2] )
         .map( kv =&gt; {
            if( !(kv = kv.trim()) || kv == &quot;}&quot; ) return
            v[ kv.match(/\s?(\S+)\s?=/)[1] ] = kv.substr( kv.indexOf(&quot;{&quot;)+1 )
         })
        tags.push( { k:tag, v } )
      }catch(e){ console.error(e) }
    })
    return {text, tags}
  },
  encode: (text,tags) =&gt; {
    let str = text+&quot;\n&quot;
    for( let i in tags ){
      let item = tags[i]
      if( item.ruler ){
        str += `@${item.ruler}\n`
        continue;
      }
      str += `@${item.k}\n`
      for( let j in item.v ) str += `  ${j} = {${item.v[j]}}\n`
      str += `}\n`
    }
    return str
  }
}
</code></pre>
<p>The above (de)multiplexes text/metadata, expands bibs, and (de)serializes BibTeX (and it all fits more or less on one A4 paper).</p>
<blockquote>
<p>The above can be used as a starting point for LLMs to translate/steelman it into a more formal form/language.</p>
</blockquote>
<pre><code>str = `
hello world
@hello@greeting
@{some-section}
@flap{
asdf = {23423}
}`
var {tags,text} = xrtext.decode(str) // demultiplex text &amp; bibtex
tags.find( (t) =&gt; t.k == 'flap{' ).v.asdf = 1 // edit tag
tags.push({ k:'bar{', v:{abc:123} }) // add tag
console.log( xrtext.encode(text,tags) ) // multiplex text &amp; bibtex back together
</code></pre>
<pre><code>@{references-start}
@misc{emilyHegland/Edgar&amp;Frod,
author = {Emily Hegland},
title = {Edgar &amp; Frode Hegland, November 2021},
year = {2021},
month = {11},
}
</code></pre>
<p>The above BibTeX-flavor can be imported, but will be rewritten to Dumb BibTeX, to satisfy the rules above, as well as the <a href="#core-principle">core principle</a></p>
<pre><code>@visual-meta{
version = {1.1},
generator = {Author 7.6.2 (1064)},
section = {visual-meta-header}
}
@misc{emilyHegland/Edgar&amp;Frod,
author = {Emily Hegland},
title = {Edgar &amp; Frode Hegland, November 2021},
year = {2021},
month = {11},
section = {references}
}
</code></pre>
<h1 id="hyper-copy-paste">HYPER copy/paste</h1>
<p>The previous example offers something exciting compared to simple copy/paste of 3D objects or text.


@ -265,8 +265,8 @@ This allows instant realtime tagging of objects at various scopes:
This empowers the enduser's spatial expressiveness (see [the core principle](#core-principle)): spatial wires can be rendered, words can be highlighted, spatial objects can be highlighted/moved/scaled, links can be manipulated by the user.<br>
The simplicity of appending BibTeX 'tags' (humans first, machines later) is also demonstrated by [visual-meta](https://visual-meta.info) in greater detail.
1. The XR Browser needs to adjust tag-scope based on the enduser's needs/focus (infinite tagging only makes sense when the environment is scaled down significantly)
1. The XR Browser should always allow the human to view/edit the metadata, by clicking 'toggle metadata' on the 'back' (contextmenu e.g.) of any XR text, anywhere anytime.
> NOTE: infinite matches both 'house' and 'houses' in text, as well as spatial objects with `"class":"house"` or name "house". This multiplexing of id/category is deliberate because of [the core principle](#core-principle).
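A non-normative sketch of that multiplexed matching, where one identifier covers text words, object names, and classes (the `matchesTag` helper and the `candidate` shape are illustrative, not from the spec):

```javascript
// Hypothetical sketch: at 'infinite' tag-scope one tag matches text words
// ('house', 'houses') as well as spatial objects named/classed 'house'.
const matchesTag = (tag, candidate) =>
  candidate.name == tag || candidate.class == tag ||
  ( candidate.word || '' ).match( new RegExp(`^${tag}s?$`) ) != null
```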
@ -280,25 +280,25 @@ The XR Fragment specification bumps the traditional default browser-mimetype
to a green eco-friendly:

`text/plain;charset=utf-8;bib=^@`

This indicates that [bibs](https://github.com/coderofsalvation/tagbibs) and [bibtags](https://en.wikipedia.org/wiki/BibTeX) matching regex `^@` will automatically get filtered out, in order to:

* automatically detect links between textual/spatial objects
* detect opinionated bibtag appendices ([visual-meta](https://visual-meta.info) e.g.)

Its concept is similar to literate programming, which empowers local/remote responses to:

* (de)multiplex human text and metadata in one go (see [the core principle](#core-principle))
* avoid separate implementation/network-overhead for metadata (see [the core principle](#core-principle))
* ensure high FPS: HTML/RDF historically is too 'requesty'/'parsy' for game studios
* offer rich send/receive/copy-paste everywhere by default, metadata being retained (see [the core principle](#core-principle))
* net result: fewer webservices, therefore fewer servers, and overall better FPS in XR
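As a rough, non-normative sketch (the `demux` name is illustrative), the `bib=^@` contract boils down to splitting the payload at the first line matching `^@`:

```javascript
// Hypothetical sketch (not part of the spec): demultiplex a plain-text
// payload into human text and trailing bib(tex) metadata, per the ^@
// convention of mimetype `text/plain;charset=utf-8;bib=^@`.
const demux = (str) => {
  const lines = str.split('\n')
  const i = lines.findIndex( (l) => /^@/.test(l) )   // first metadata line
  return i < 0 ? { text: str, meta: '' }
               : { text: lines.slice(0, i).join('\n'),
                   meta: lines.slice(i).join('\n') }
}
```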
> This significantly expands expressiveness and portability of human-tagged text, by **postponing machine-concerns to the end of the human text** in contrast to literal interweaving of content and markup symbols (or extra network requests, webservices e.g.).

For all other purposes, regular mimetypes can be used (but are not required by the spec).<br>
To keep XR Fragments a lightweight spec, BibTeX is used for text/spatial tagging (not a scripting language or RDF e.g.).

> Applications are also free to attach any JSON(LD / RDF) to spatial objects using custom properties (but these are not interpreted by this spec).
@ -343,21 +343,20 @@ Example:
+------------------------------------------------------------------------------------+
```

3D object names and/or classes map to `name` of visual-meta glossary-entries.
This allows rich interaction and interlinking between text and 3D objects:

1. When the user surfs to https://.../index.gltf#rentalhouse the XR Fragments-parser points the enduser to the rentalhouse object, and can show contextual info about it.
2. When (partial) remote content is embedded thru XR Fragment queries (see XR Fragment queries), indirectly related metadata can be embedded along.

## Bibs-enabled BibTeX: lowest common denominator for tagging/triples

> "When a car breaks down, the ones **without** turbosupercharger are easier to fix"

Unlike XML or JSON, the typeless, unnested, everything-is-text nature of BibTeX tags is a great advantage for introspection.<br>
It's a missing sensemaking precursor to (eventual) extrospective RDF.<br>
BibTeX-appendices are already used in the digital AND physical world (academic books, [visual-meta](https://visual-meta.info)), perhaps due to their terseness & simplicity.<br>
In that sense, it's one step up from the `.ini` fileformat (which has never leaked into the physical world like BibTeX):
1. <b id="frictionless-copy-paste">frictionless copy/pasting</b> (by humans) of (unobtrusive) content AND metadata
1. an introspective 'sketchpad' for metadata, which can (optionally) mature into RDF later
@ -368,6 +367,7 @@ In that sense, it's one step up from the `.ini` fileformat (which has never leak
| structure | fuzzy (sensemaking) | precise |
| space/scope | local | world |
| everything is text (string) | yes | no |
| paperfriendly | [bibs](https://github.com/coderofsalvation/tagbibs) | no |
| leaves (dictated) text intact | yes | no |
| markup language | just an appendix | ~4 different |
| polyglot format | no | yes |
@ -383,81 +383,90 @@ In that sense, it's one step up from the `.ini` fileformat (which has never leak
| implementation/network overhead | no | depends |
| used in (physical) books/PDF | yes (visual-meta) | no |
| terse non-verb predicates | yes | no |
| nested structures | no (but: BibTeX rulers) | yes |

## XR Text example parser

1. The XR Fragments spec does not aim to harden the BibTeX format
2. However, respect multi-line BibTeX values because of [the core principle](#core-principle)
3. Expand bibs and rulers (like `@{visual-meta-start}`) according to the [tagbibs spec](https://github.com/coderofsalvation/tagbibs)
4. BibTeX snippets should always start at the beginning of a line (regex: ^@), hence mimetype `text/plain;charset=utf-8;tag=^@`

Here's an XR Text (de)multiplexer in javascript, which ticks all the above boxes:
```
xrtext = {
  decode: (str) => {
    //             bibtex: ↓@  ↓<tag|tag{phrase,|{ruler}>  ↓property  ↓end
    let pat  = [ /@/, /^\S+[,{}]/, /},/, /}/ ]
    let tags = [], text='', i=0, prop=''
    var bibs = { regex: /(@[a-zA-Z0-9_+]+@[a-zA-Z0-9_@]+)/g, tags: {}}
    let lines = str.replace(/\r?\n/g,'\n').split(/\n/)
    for( let i = 0; !lines[i].match( /^@/ ); i++ ) text += lines[i]+'\n'

    bibtex = lines.join('\n').substr( text.length )
    bibtex.replace( bibs.regex , (m,k,v) => {
       tok   = m.substr(1).split("@")
       match = tok.shift()
       tok.map( (t) => bibs.tags[match] = `@${t}{${match},\n}\n` )
    })
    bibtex = Object.values(bibs.tags).join('\n') + bibtex.replace( bibs.regex, '')
    bibtex.split( pat[0] ).map( (t) => {
      try{
        let v = {}
        if( !(t = t.trim()) ) return
        if( tag = t.match( pat[1] ) ) tag = tag[0]
        if( tag.match( /^{.*}$/ ) ) return tags.push({ruler:tag})
        t = t.substr( tag.length )
        t.split( pat[2] )
         .map( kv => {
            if( !(kv = kv.trim()) || kv == "}" ) return
            v[ kv.match(/\s?(\S+)\s?=/)[1] ] = kv.substr( kv.indexOf("{")+1 )
         })
        tags.push( { k:tag, v } )
      }catch(e){ console.error(e) }
    })
    return {text, tags}
  },
  encode: (text,tags) => {
    let str = text+"\n"
    for( let i in tags ){
      let item = tags[i]
      if( item.ruler ){
        str += `@${item.ruler}\n`
        continue;
      }
      str += `@${item.k}\n`
      for( let j in item.v ) str += `  ${j} = {${item.v[j]}}\n`
      str += `}\n`
    }
    return str
  }
}
```
The above (de)multiplexes text/metadata, expands bibs, and (de)serializes BibTeX (and it all fits more or less on one A4 paper).

> The above can be used as a starting point for LLMs to translate/steelman it into a more formal form/language.
```
str = `
hello world
@hello@greeting
@{some-section}
@flap{
 asdf = {23423}
}`
var {tags,text} = xrtext.decode(str)          // demultiplex text & bibtex
tags.find( (t) => t.k == 'flap{' ).v.asdf = 1 // edit tag
tags.push({ k:'bar{', v:{abc:123} })          // add tag
console.log( xrtext.encode(text,tags) )       // multiplex text & bibtex back together
```
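The bibs-expansion step inside `decode` can also be illustrated standalone (a hedged sketch; `expandBib` is not part of the spec): per the tagbibs expansion rule, a bib like `@hello@greeting` becomes the full bibtag `@greeting{hello,}` before regular BibTeX parsing.

```javascript
// Hypothetical sketch of bibs-to-BibTeX expansion:
// `@word@tag` becomes a full bibtag `@tag{word,\n}\n`.
const expandBib = (bib) => {
  const [phrase, ...tags] = bib.substr(1).split('@')  // drop leading @, split on @
  return tags.map( (t) => `@${t}{${phrase},\n}\n` ).join('')
}
```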
```
@{references-start}


@ -3,7 +3,7 @@
Internet Engineering Task Force L.R. van Kammen
Internet-Draft 6 September 2023
Intended status: Informational
@ -40,7 +40,7 @@ Status of This Memo
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on 9 March 2024.

Copyright Notice
@ -53,7 +53,7 @@ Copyright Notice
van Kammen Expires 9 March 2024 [Page 1]

Internet-Draft XR Fragments September 2023
@ -76,17 +76,17 @@ Table of Contents
8. Text in XR (tagging,linking to spatial objects) . . . . . . . 6
8.1. Default Data URI mimetype . . . . . . . . . . . . . . . . 9
8.2. URL and Data URI . . . . . . . . . . . . . . . . . . . . 10
8.3. Bibs-enabled BibTeX: lowest common denominator for tagging/
triples . . . . . . . . . . . . . . . . . . . . . . . . . 11
8.4. XR Text example parser . . . . . . . . . . . . . . . . . 13
9. HYPER copy/paste . . . . . . . . . . . . . . . . . . . . . . 15
10. XR Fragment queries . . . . . . . . . . . . . . . . . . . . . 16
10.1. including/excluding . . . . . . . . . . . . . . . . . . 16
10.2. Query Parser . . . . . . . . . . . . . . . . . . . . . . 17
10.3. XR Fragment URI Grammar . . . . . . . . . . . . . . . . 18
11. Security Considerations . . . . . . . . . . . . . . . . . . . 18
12. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 18
13. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 18
1.  Introduction
|               | is a person who lives in oklahoma")         |
+---------------+---------------------------------------------+
| ◻             | ascii representation of a 3D object/mesh    |
+---------------+---------------------------------------------+
| (un)obtrusive | obtrusive: wrapping human text/thought in   |
|               | XML/HTML/JSON obfuscates human text into a  |
|               | salad of machine-symbols and words          |
+---------------+---------------------------------------------+

Table 1
|          |         |              | or class mapping)          |
+----------+---------+--------------+----------------------------+
Table 2

| xyz coordinates are similar to ones found in SVG Media Fragments
5.  List of metadata for 3D nodes

+=======+========+================+============================+
|                                                        |
+--------------------------------------------------------+

An XR Fragment-compatible browser viewing this scene allows the
end-user to interact with buttonA and buttonB.
In case of buttonA the end-user will be teleported to another
location and time in the *current loaded scene*, but buttonB will
*replace the current scene* with a new one, like other.fbx.
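The teleport-versus-replace distinction above can be sketched in a few lines. This is a hypothetical helper, not part of the spec; `routeHref` and the action names are illustrative:

```javascript
// Hypothetical sketch: classify an href as an in-scene teleport
// (fragment-only, like buttonA) or a scene replacement (like buttonB).
function routeHref(href){
  return href.startsWith('#')
    ? { action: 'teleport', frag: href.slice(1) }  // stay in current scene
    : { action: 'load',     url:  href }           // replace scene, e.g. 'other.fbx#pos=0,0,0'
}
```

A real XR Fragment browser would feed `frag` to its fragment parser and `url` to its asset loader.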
7.  Embedding 3D content

Here's an ascii representation of a 3D scene-graph with 3D objects
Ideally metadata must come *later with* text, but not *obfuscate* the
text, or *in another* file.

|  Humans first, machines (AI) later (core principle (#core-
|  principle)

This way:

1.  XR Fragments allows <b id="tagging-text">hasslefree XR text
    tagging</b>, using BibTeX metadata *at the end of content* (like
    visual-meta (https://visual.meta.info)).
1.  The XR Browser needs to adjust tag-scope based on the enduser's
    needs/focus (infinite tagging only makes sense when the
    environment is scaled down significantly)
2.  The XR Browser should always allow the human to view/edit the
    metadata, by clicking 'toggle metadata' on the 'back'
    (contextmenu e.g.) of any XR text, anywhere anytime.
|  NOTE: infinite matches both 'house' and 'houses' in text, as well
|  as spatial objects with "class":"house" or name "house".  This
to a green eco-friendly:

text/plain;charset=utf-8;bib=^@
This indicates that bibs (https://github.com/coderofsalvation/
tagbibs) and bibtags (https://en.wikipedia.org/wiki/BibTeX) matching
regex ^@ will automatically get filtered out, in order to:

*  automatically detect links between textual/spatial objects
*  detect opinionated bibtag appendices (visual-meta (https://visual-
   meta.info) e.g.)
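A minimal sketch of that filtering step, assuming the `^@` regex announced by the mimetype (the function name is illustrative; the full demultiplexer appears in section 8.4):

```javascript
// Sketch (not normative): split a text/plain;charset=utf-8;bib=^@ payload
// into human text and trailing bib(tex) tags, using the announced regex ^@.
function splitTextAndTags(payload){
  const lines = payload.split(/\r?\n/)
  const i = lines.findIndex( (l) => /^@/.test(l) )  // first metadata line
  if( i < 0 ) return { text: payload, tags: '' }    // no metadata appended
  return { text: lines.slice(0, i).join('\n'),
           tags: lines.slice(i).join('\n') }
}
```

Note how the human text needs no escaping or markup at all: the metadata simply starts at the first line matching `^@`.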
Its concept is similar to literate programming, which empowers local/
remote responses to:

*  (de)multiplex human text and metadata in one go (see the core
   principle (#core-principle))
*  no network-overhead for metadata (see the core principle (#core-
   principle))
*  ensuring high FPS: HTML/RDF historically is too 'requesty'/'parsy'
   for game studios
*  rich send/receive/copy-paste everywhere by default, metadata being
   retained (see the core principle (#core-principle))
*  net result: less webservices, therefore less servers, and
   overall better FPS in XR
|  This significantly expands expressiveness and portability of human
|  tagged text, by *postponing machine-concerns to the end of the
|  human text* in contrast to literal interweaving of content and
|  markupsymbols (or extra network requests, webservices e.g.).
For all other purposes, regular mimetypes can be used (but are not
required by the spec).
To keep XR Fragments a lightweight spec, BibTeX is used for text/
spatial tagging (not a scripting language or RDF e.g.).

|  Applications are also free to attach any JSON(LD / RDF) to spatial
|  objects using custom properties (but is not interpreted by this
| }`                                                                                |
+------------------------------------------------------------------------------------+

3D object names and/or classes map to name of visual-meta glossary-
entries.  This allows rich interaction and interlinking between text
and 3D objects:
1.  When the user surfs to https://.../index.gltf#rentalhouse the XR
    Fragments-parser points the enduser to the rentalhouse object,
    and can show contextual info about it.
2.  When (partial) remote content is embedded thru XR Fragment
    queries (see XR Fragment queries), indirectly related metadata
    can be embedded along.
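Step 1 could be sketched as follows; `scene` is a hypothetical stand-in for a real scene-graph (an array of nodes here), not an API defined by this spec:

```javascript
// Illustrative sketch: resolve a URL fragment to a named/classed
// scene object, so the browser can point the enduser at it.
function resolveFragment(url, scene){
  const id = new URL(url).hash.slice(1)  // '#rentalhouse' -> 'rentalhouse'
  return scene.find( (n) => n.name === id || n.class === id ) || null
}

const scene = [ { name: 'rentalhouse', class: 'house' } ]
const node  = resolveFragment('https://example.com/index.gltf#rentalhouse', scene)
```

Matching on both name and class mirrors the deliberate id/category multiplexing noted earlier.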
8.3.  Bibs-enabled BibTeX: lowest common denominator for tagging/triples
|  "When a car breaks down, the ones *without* turbosupercharger are
|  easier to fix"

Unlike XML or JSON, the typeless, unnested, everything-is-text nature
of BibTeX tags is a great advantage for introspection.
It's a missing sensemaking precursor to (eventual) extrospective RDF.
BibTeX-appendices are already used in the digital AND physical world
(academic books, visual-meta (https://visual-meta.info)), perhaps due
to its terseness & simplicity.
In that sense, it's one step up from the .ini fileformat (which has
never leaked into the physical world like BibTeX):
1.  <b id="frictionless-copy-paste">frictionless copy/pasting</b> (by
    humans) of (unobtrusive) content AND metadata
+================+=====================================+===============+
|characteristic  |UTF8 Plain Text (with BibTeX)        |RDF            |
+================+=====================================+===============+
|perspective     |introspective                        |extrospective  |
+----------------+-------------------------------------+---------------+
|structure       |fuzzy (sensemaking)                  |precise        |
+----------------+-------------------------------------+---------------+
|space/scope     |local                                |world          |
+----------------+-------------------------------------+---------------+
|everything is   |yes                                  |no             |
|text (string)   |                                     |               |
+----------------+-------------------------------------+---------------+
|paperfriendly   |bibs                                 |no             |
|                |(https://github.com/coderofsalvation/|               |
|                |tagbibs)                             |               |
+----------------+-------------------------------------+---------------+
|leaves          |yes                                  |no             |
|(dictated) text |                                     |               |
|intact          |                                     |               |
+----------------+-------------------------------------+---------------+
|markup language |just an appendix                     |~4 different   |
+----------------+-------------------------------------+---------------+
|polyglot format |no                                   |yes            |
+----------------+-------------------------------------+---------------+
|easy to copy/   |yes                                  |up to          |
|paste           |                                     |application    |
|content+metadata|                                     |               |
+----------------+-------------------------------------+---------------+
|easy to write/  |yes                                  |depends        |
|repair for      |                                     |               |
|layman          |                                     |               |
+----------------+-------------------------------------+---------------+
|easy to         |yes (fits on A4 paper)               |depends        |
|(de)serialize   |                                     |               |
+----------------+-------------------------------------+---------------+
|infrastructure  |selfcontained (plain text)           |(semi)networked|
+----------------+-------------------------------------+---------------+
|freeform        |yes, terse                           |yes, verbose   |
|tagging/        |                                     |               |
|annotation      |                                     |               |
+----------------+-------------------------------------+---------------+
|can be appended |yes                                  |up to          |
|to text-content |                                     |application    |
+----------------+-------------------------------------+---------------+
|copy-paste text |yes                                  |up to          |
|preserves       |                                     |application    |
|metadata        |                                     |               |
+----------------+-------------------------------------+---------------+
|emoji           |yes                                  |depends on     |
|                |                                     |encoding       |
+----------------+-------------------------------------+---------------+
|predicates      |free                                 |semi pre-      |
|                |                                     |determined     |
+----------------+-------------------------------------+---------------+
|implementation/ |no                                   |depends        |
|network overhead|                                     |               |
+----------------+-------------------------------------+---------------+
|used in         |yes (visual-meta)                    |no             |
|(physical)      |                                     |               |
|books/PDF       |                                     |               |
+----------------+-------------------------------------+---------------+
|terse non-verb  |yes                                  |no             |
|predicates      |                                     |               |
+----------------+-------------------------------------+---------------+
|nested          |no (but: BibTeX rulers)              |yes            |
|structures      |                                     |               |
+----------------+-------------------------------------+---------------+
Table 5

8.4.  XR Text example parser
1.  The XR Fragments spec does not aim to harden the BibTeX format
2.  However, respect multi-line BibTeX values because of the core
    principle (#core-principle)
3.  Expand bibs and rulers (like ${visual-meta-start}) according to
    the tagbibs spec (https://github.com/coderofsalvation/tagbibs)
4.  BibTeX snippets should always start at the beginning of a line
    (regex: ^@), hence mimetype text/plain;charset=utf-8;tag=^@

Here's an XR Text (de)multiplexer in javascript, which ticks all the
above boxes:
xrtext = {

  decode: (str) => {
    // bibtex: ↓@ ↓<tag|tag{phrase,|{ruler}>  ↓property  ↓end
    let pat   = [ /@/, /^\S+[,{}]/, /},/, /}/ ]
    let tags  = [], text='', i=0, prop=''
    var bibs  = { regex: /(@[a-zA-Z0-9_+]+@[a-zA-Z0-9_@]+)/g, tags: {}}
    let lines = str.replace(/\r?\n/g,'\n').split(/\n/)
    for( let i = 0; !lines[i].match( /^@/ ); i++ ) text += lines[i]+'\n'

    bibtex = lines.join('\n').substr( text.length )
    bibtex.replace( bibs.regex , (m,k,v) => {
       tok   = m.substr(1).split("@")
       match = tok.shift()
       tok.map( (t) => bibs.tags[match] = `@${t}{${match},\n}\n` )
    })
    bibtex = Object.values(bibs.tags).join('\n') + bibtex.replace( bibs.regex, '')
    bibtex.split( pat[0] ).map( (t) => {
        try{
          let v = {}
          if( !(t = t.trim()) ) return
          if( tag = t.match( pat[1] ) ) tag = tag[0]
          if( tag.match( /^{.*}$/ ) ) return tags.push({ruler:tag})
          t = t.substr( tag.length )
          t.split( pat[2] )
           .map( kv => {
              if( !(kv = kv.trim()) || kv == "}" ) return
              v[ kv.match(/\s?(\S+)\s?=/)[1] ] = kv.substr( kv.indexOf("{")+1 )
           })
          tags.push( { k:tag, v } )
        }catch(e){ console.error(e) }
    })
    return {text, tags}
  },

  encode: (text,tags) => {
    let str = text+"\n"
    for( let i in tags ){
      let item = tags[i]
      if( item.ruler ){
        str += `@${item.ruler}\n`
        continue;
      }
      str += `@${item.k}\n`
      for( let j in item.v ) str += `  ${j} = {${item.v[j]}}\n`
      str += `}\n`
    }
    return str
  }
}
The above (de)multiplexes text/metadata, expands bibs, (de)serializes
bibtex (and all fits more or less on one A4 paper)

|  The above can be used as a startingpoint for LLMs to translate/
|  steelman to a more formal form/language.
str = `
hello world
@hello@greeting
@{some-section}
@flap{
asdf = {23423}
}`
var {tags,text} = xrtext.decode(str) // demultiplex text & bibtex
tags.find( (t) => t.k == 'flap{' ).v.asdf = 1 // edit tag
tags.push({ k:'bar{', v:{abc:123} }) // add tag
console.log( xrtext.encode(text,tags) ) // multiplex text & bibtex back together
@{references-start}
@misc{emilyHegland/Edgar&Frod,
author = {Emily Hegland},
title = {Edgar & Frode Hegland, November 2021},
year = {2021},
month = {11},
}
The above BibTeX-flavor can be imported; however, it will be
rewritten to Dumb BibTeX, to satisfy rule 2 & 5, as well as the core
principle (#core-principle)
@visual-meta{
version = {1.1},
generator = {Author 7.6.2 (1064)},
section = {visual-meta-header}
}
@misc{emilyHegland/Edgar&Frod,
author = {Emily Hegland},
title = {Edgar & Frode Hegland, November 2021},
year = {2021},
month = {11},
section = {references}
}
9.  HYPER copy/paste
ways:

1.  time/space: 3D object (current animation-loop)
2.  text: TeXt object (including BibTeX/visual-meta if any)
3.  interlinked: Collected objects by visual-meta tag
10.  XR Fragment queries

Include, exclude, hide/show objects using space-separated strings:
|          | property)                                       |
+----------+-------------------------------------------------+
| -        | removes/hides object(s)                         |
+----------+-------------------------------------------------+
| :        | indicates an object-embedded custom property    |
|          | key/value                                       |
+----------+-------------------------------------------------+
| .        | alias for "class" :".foo" equals class:foo      |
+----------+-------------------------------------------------+
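As an illustration of the operators above (a simplified sketch only: the normative grammar and parser follow below, and comparisons like `price:>2` are not handled here):

```javascript
// Does a query-string token match an object's name or class?
const matches = (o, s) => o.name === s || o.class === s

// Simplified sketch of include/exclude filtering with the
// '-', ':' and '.' operators from the table above.
function applyQuery(query, objects){
  const rules = query.split(' ').filter( (r) => r )
  return objects.filter( (o) => {
    let include = false
    for( const r of rules ){
      if( r[0] === '-' ){ if( matches(o, r.slice(1)) ) return false }  // exclude/hide
      else if( r.includes(':') ){                                      // custom property key/value
        const [k, v] = r.split(':')
        if( String(o[k]) === v ) include = true
      }
      else if( matches(o, r[0] === '.' ? r.slice(1) : r) ) include = true  // '.' = class alias
    }
    return include
  })
}
```

For example, `applyQuery('.house -frontdoor', scene)` includes every object with class `house` but hides the one named `frontdoor`.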
10.  then strip key-operator: convert "-foo" into "foo"
11.  add operator and value to rule-array
12.  therefore we set id to true or false (false=excluder -)
13.  and we set root to true or false (true=/ root selector is
     present)
14.  we convert key '/foo' into 'foo'
15.  finally we add the key/value to the store like store.foo =
     {id:false,root:true} e.g.
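Steps 10-15 above can be sketched as follows (variable names are assumptions for illustration, not spec requirements):

```javascript
// Sketch of steps 10-15: convert one query token into a store entry.
function addRule(store, token){
  const id   = token[0] !== '-'             // step 12: false = excluder '-'
  const key  = id ? token : token.slice(1)  // step 10: strip key-operator
  const root = key[0] === '/'               // step 13: '/' root selector present
  const k    = root ? key.slice(1) : key    // step 14: '/foo' -> 'foo'
  store[k]   = { id, root }                 // step 15: store.foo = {id:false,root:true}
  return store
}
```

Running `addRule({}, '-/foo')` yields exactly the `store.foo = {id:false,root:true}` entry from step 15.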
<td><tt></tt></td>
<td>ascii representation of a 3D object/mesh</td>
</tr>
<tr>
<td>(un)obtrusive</td>
<td>obtrusive: wrapping human text/thought in XML/HTML/JSON obfuscates human text into a salad of machine-symbols and words</td>
</tr>
</tbody>
</table></section>
The simplicity of appending BibTeX 'tags' (humans first, machines later) is also demonstrated by <eref target="https://visual-meta.info">visual-meta</eref> in greater detail.</t>

<ol spacing="compact">
<li>The XR Browser needs to adjust tag-scope based on the enduser's needs/focus (infinite tagging only makes sense when the environment is scaled down significantly)</li>
<li>The XR Browser should always allow the human to view/edit the metadata, by clicking 'toggle metadata' on the 'back' (contextmenu e.g.) of any XR text, anywhere anytime.</li>
</ol>

<blockquote><t>NOTE: infinite matches both 'house' and 'houses' in text, as well as spatial objects with <tt>&quot;class&quot;:&quot;house&quot;</tt> or name &quot;house&quot;. This multiplexing of id/category is deliberate because of <eref target="#core-principle">the core principle</eref>.</t>
</blockquote>
<t>The XR Fragment specification bumps the traditional default browser-mimetype</t>
<t><tt>text/plain;charset=US-ASCII</tt></t>
<t>to a green eco-friendly:</t>
<t><tt>text/plain;charset=utf-8;bib=^@</tt></t>
<t>This indicates that <eref target="https://github.com/coderofsalvation/tagbibs">bibs</eref> and <eref target="https://en.wikipedia.org/wiki/BibTeX">bibtags</eref> matching regex <tt>^@</tt> will automatically get filtered out, in order to:</t>

<ul spacing="compact">
<li>automatically detect links between textual/spatial objects</li>
<li>detect opinionated bibtag appendices (<eref target="https://visual-meta.info">visual-meta</eref> e.g.)</li>
</ul>
<t>Its concept is similar to literate programming, which empowers local/remote responses to:</t>

<ul spacing="compact">
<li>(de)multiplex human text and metadata in one go (see <eref target="#core-principle">the core principle</eref>)</li>
<li>no network-overhead for metadata (see <eref target="#core-principle">the core principle</eref>)</li>
<li>ensuring high FPS: HTML/RDF historically is too 'requesty'/'parsy' for game studios</li>
<li>rich send/receive/copy-paste everywhere by default, metadata being retained (see <eref target="#core-principle">the core principle</eref>)</li>
<li>net result: less webservices, therefore less servers, and overall better FPS in XR</li>
</ul>
<blockquote><t>This significantly expands expressiveness and portability of human tagged text, by <strong>postponing machine-concerns to the end of the human text</strong> in contrast to literal interweaving of content and markupsymbols (or extra network requests, webservices e.g.).</t>
</blockquote><t>For all other purposes, regular mimetypes can be used (but are not required by the spec).<br />
To keep XR Fragments a lightweight spec, BibTeX is used for text/spatial tagging (not a scripting language or RDF e.g.).</t>

<blockquote><t>Applications are also free to attach any JSON(LD / RDF) to spatial objects using custom properties (but is not interpreted by this spec).</t>
</blockquote></section>
| }`                                                                                 |
+------------------------------------------------------------------------------------+
</artwork>
<t>3D object names and/or classes map to <tt>name</tt> of visual-meta glossary-entries.
This allows rich interaction and interlinking between text and 3D objects:</t> This allows rich interaction and interlinking between text and 3D objects:</t>
<ol spacing="compact">
<li>When the user surfs to https://.../index.gltf#rentalhouse the XR Fragments-parser points the enduser to the rentalhouse object, and can show contextual info about it.</li>
<li>When (partial) remote content is embedded through XR Fragment queries (see XR Fragment queries), indirectly related metadata can be embedded along with it.</li>
</ol>
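<t>Step 1 above could be sketched as follows (a hypothetical, three.js-style lookup; the scene shape, <tt>userData</tt> field and metadata values are assumptions, not part of the spec):</t>

```javascript
// Hypothetical sketch: resolving an XR Fragment like
// https://.../index.gltf#rentalhouse to a named object in a loaded scene,
// so the browser can show its (visual-meta) metadata as contextual info.
const scene = {
  children: [
    { name: 'rentalhouse',
      userData: { 'visual-meta': { name: 'rentalhouse', note: 'for rent' } } }
  ]
}

function navigateFragment(scene, url) {
  const id  = new URL(url).hash.substr(1)               // '#rentalhouse' -> 'rentalhouse'
  const obj = scene.children.find((o) => o.name === id)
  return obj ? obj.userData['visual-meta'] : null       // contextual info (or nothing)
}

const meta = navigateFragment(scene, 'https://example.com/index.gltf#rentalhouse')
console.log(meta)
```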
</section>
<section anchor="bibs-enabled-bibtex-lowest-common-denominator-for-tagging-triples"><name>Bibs-enabled BibTeX: lowest common denominator for tagging/triples</name>
<blockquote><t>&quot;When a car breaks down, the ones <strong>without</strong> turbosupercharger are easier to fix&quot;</t>
</blockquote><t>Unlike XML or JSON, the typeless, unnested, everything-is-text nature of BibTeX tags is a great advantage for introspection.<br />
It's a missing sensemaking precursor to (eventual) extrospective RDF.<br />
BibTeX-appendices are already used in the digital AND physical world (academic books, <eref target="https://visual-meta.info">visual-meta</eref>), perhaps due to their terseness &amp; simplicity.<br />
In that sense, it's one step up from the <tt>.ini</tt> fileformat (which, unlike BibTeX, has never leaked into the physical world):</t>
<ol spacing="compact">
<li>&lt;b id=&quot;frictionless-copy-paste&quot;&gt;frictionless copy/pasting&lt;/b&gt; (by humans) of (unobtrusive) content AND metadata</li> <li>&lt;b id=&quot;frictionless-copy-paste&quot;&gt;frictionless copy/pasting&lt;/b&gt; (by humans) of (unobtrusive) content AND metadata</li>
<td>no</td> <td>no</td>
</tr> </tr>
<tr>
<td>paperfriendly</td>
<td><eref target="https://github.com/coderofsalvation/tagbibs">bibs</eref></td>
<td>no</td>
</tr>
<tr> <tr>
<td>leaves (dictated) text intact</td> <td>leaves (dictated) text intact</td>
<td>yes</td> <td>yes</td>
<tr>
<td>nested structures</td>
<td>no (but: BibTeX rulers)</td>
<td>yes</td>
</tr>
</tbody>
</table></section>
<section anchor="xr-text-example-parser"><name>XR Text example parser</name>
<ol spacing="compact">
<li>The XR Fragments spec does not aim to harden the BibTeX format</li>
<li>However, respect multi-line BibTeX values because of <eref target="#core-principle">the core principle</eref></li>
<li>Expand bibs and rulers (like <tt>${visual-meta-start}</tt>) according to the <eref target="https://github.com/coderofsalvation/tagbibs">tagbibs spec</eref></li>
<li>BibTeX snippets should always start at the beginning of a line (regex: ^@), hence the mimetype <tt>text/plain;charset=utf-8;tag=^@</tt></li>
</ol>
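<t>Rule 3 above (expanding bibs) can be sketched in isolation as follows; this minimal sketch uses the same bibs-regex as the parser, and the tag names are made up for illustration:</t>

```javascript
// Minimal sketch of bibs-expansion (rule 3): a paperfriendly tag like
// '@john@friend' expands into a (dumb) BibTeX tag '@friend{john, }'.
const bibsRegex = /(@[a-zA-Z0-9_+]+@[a-zA-Z0-9_@]+)/g

function expandBibs(str) {
  return str.replace(bibsRegex, (m) => {
    const tok  = m.substr(1).split('@')     // 'john@friend' -> ['john','friend']
    const name = tok.shift()
    return tok.map((t) => `@${t}{${name},\n}`).join('\n')
  })
}

console.log(expandBibs('hello world\n@john@friend'))
```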
<t>Here's an XR Text (de)multiplexer in javascript, which ticks all the above boxes:</t>
<artwork>xrtext = {

  decode: (str) =&gt; {
    // bibtex:    ↓@         ↓&lt;tag|tag{phrase,|{ruler}&gt;  ↓property  ↓end
    let pat   = [ /@/,       /^\S+[,{}]/,                /},/,      /}/ ]
    let tags  = [], text = '', bibtex = '', tag = '', tok, match
    var bibs  = { regex: /(@[a-zA-Z0-9_+]+@[a-zA-Z0-9_@]+)/g, tags: {}}
    let lines = str.replace(/\r?\n/g,'\n').split(/\n/)
    // everything before the first line starting with @ is human text
    for( let i = 0; i &lt; lines.length &amp;&amp; !lines[i].match( /^@/ ); i++ ) text += lines[i]+'\n'

    bibtex = lines.join('\n').substr( text.length )
    // expand bibs (@foo@tag1@tag2) into (dumb) BibTeX tags
    bibtex.replace( bibs.regex , (m,k,v) =&gt; {
       tok   = m.substr(1).split(&quot;@&quot;)
       match = tok.shift()
       tok.map( (t) =&gt; bibs.tags[match] = `@${t}{${match},\n}\n` )
    })
    bibtex = Object.values(bibs.tags).join('\n') + bibtex.replace( bibs.regex, '')
    bibtex.split( pat[0] ).map( (t) =&gt; {
        try{
          let v = {}
          if( !(t = t.trim()) ) return
          if( tag = t.match( pat[1] ) ) tag = tag[0]
          if( tag.match( /^{.*}$/ ) ) return tags.push({ruler:tag})
          t = t.substr( tag.length )
          t.split( pat[2] )
           .map( kv =&gt; {
              if( !(kv = kv.trim()) || kv == &quot;}&quot; ) return
              // value = text after the opening {, minus the tag's trailing braces
              v[ kv.match(/\s?(\S+)\s?=/)[1] ] = kv.substr( kv.indexOf(&quot;{&quot;)+1 ).replace(/}[\s}]*$/,'')
           })
          tags.push( { k:tag, v } )
        }catch(e){ console.error(e) }
    })
    return {text, tags}
  },

  encode: (text,tags) =&gt; {
    let str = text+&quot;\n&quot;
    for( let i in tags ){
        let item = tags[i]
        if( item.ruler ){
          str += `@${item.ruler}\n`
          continue;
        }
        str += `@${item.k}\n`
        for( let j in item.v ) str += ` ${j} = {${item.v[j]}}\n`
        str += `}\n`
    }
    return str
  }
}
</artwork>
<t>The above (de)multiplexes text/metadata, expands bibs, and (de)serializes BibTeX (and it all fits more or less on one A4 page)</t>
<blockquote><t>The above can be used as a starting point for LLMs to translate/steelman the parser into a more formal form/language.</t>
</blockquote>
<artwork>str = `
hello world
@hello@greeting
@{some-section}
@flap{
asdf = {23423}
}`
var {tags,text} = xrtext.decode(str) // demultiplex text &amp; bibtex
tags.find( (t) =&gt; t.k == 'flap{' ).v.asdf = 1 // edit tag
tags.push({ k:'bar{', v:{abc:123} }) // add tag
console.log( xrtext.encode(text,tags) ) // multiplex text &amp; bibtex back together
</artwork>
<artwork>@{references-start}
@misc{emilyHegland/Edgar&amp;Frod,
author = {Emily Hegland},
title = {Edgar &amp; Frode Hegland, November 2021},
year = {2021},
month = {11},
}
</artwork>
<t>The above BibTeX-flavor can be imported, however it will be rewritten to Dumb BibTeX to satisfy rules 2 &amp; 5, as well as the <eref target="#core-principle">core principle</eref></t>
<artwork>@visual-meta{
version = {1.1},
generator = {Author 7.6.2 (1064)},
section = {visual-meta-header}
}
@misc{emilyHegland/Edgar&amp;Frod,
author = {Emily Hegland},
title = {Edgar &amp; Frode Hegland, November 2021},
year = {2021},
month = {11},
section = {references}
}
</artwork>
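<t>This rewrite could be sketched as follows; a hypothetical helper (not part of the spec) that assumes the tag-structure produced by the example parser, with made-up entry names:</t>

```javascript
// Hypothetical sketch: flattening ruler-delimited regions like
// @{references-start} ... @{references-end} into Dumb BibTeX by stamping
// a 'section' property on each enclosed tag (the rulers themselves are dropped).
function flattenRulers(tags) {
  let section = null, out = []
  for (const t of tags) {
    if (t.ruler) {
      const name = t.ruler.replace(/[{}]/g, '')   // '{references-start}' -> 'references-start'
      section = name.endsWith('-start') ? name.replace(/-start$/, '') : null
      continue
    }
    if (section) t.v.section = section
    out.push(t)
  }
  return out
}

const demo = [
  { ruler: '{references-start}' },
  { k: 'misc{someEntry,', v: { title: 'some reference' } },
  { ruler: '{references-end}' }
]
console.log(flattenRulers(demo))
```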
</section>
</section>
<section anchor="hyper-copy-paste"><name>HYPER copy/paste</name>