%%%
Title = "XR Fragments"
area = "Internet"
workgroup = "Internet Engineering Task Force"

[seriesInfo]
name = "XR-Fragments"
value = "draft-XRFRAGMENTS-leonvankammen-00"
stream = "IETF"
status = "informational"

date = 2023-04-12T00:00:00Z

[[author]]
initials = "L.R."
surname = "van Kammen"
fullname = "L.R. van Kammen"
%%%
.# Abstract
This draft is a specification for 4D URLs & navigation, which links space, time & text together, for hypermedia browsers with or without a network connection.
The specification promotes spatial addressability, sharing, navigation, querying and annotation of interactive (text) objects across (XR) browsers.
XR Fragments allows us to enrich existing dataformats by recursive use of existing, proven technologies like URI Fragments and BibTag notation.
Almost every idea in this document is demonstrated at https://xrfragment.org
{mainmatter}
# Introduction
How can we add more features to existing text & 3D scenes, without introducing new dataformats?
Historically, there have been many attempts to create the ultimate markup language or 3D fileformat.
Their lowest common denominator is: (co)authoring using plain text.
XR Fragments allows us to enrich/connect existing dataformats, by recursive use of existing technologies:

- addressability and navigation of 3D scenes/objects: URI Fragments + src/href spatial metadata
- interlinking text & 3D by collapsing space into a Word Graph (XRWG), and augmenting text with Bibs / BibTags appendices (see visual-meta e.g.)
- extending the hashtag-to-browser-viewport paradigm beyond 2D documents (to XR documents)

NOTE: The chapters in this document are ordered from high-level to low-level (technical) as much as possible.
# Core principle
XR Fragments strives to serve (nontechnical/fuzzy) humans first, and machine(implementations) later, by ensuring hasslefree text-vs-thought feedback loops.
This also means that the repair-ability of machine-matters should be human-friendly too (not too complex).
XR Fragments seeks to connect the world of text (semantic web / RDF) and the world of pixels.
Instead of combining them (in a game-editor e.g.), XR Fragments opts for a more integrated path towards them, by describing how to make browsers 4D URL-ready:
principle | XR 4D URL | HTML 2D URL |
---|---|---|
the XRWG | wordgraph (collapses 3D scene to tags) | Ctrl-F (find) |
the hashbus | hashtags map to camera/scene-projections | hashtags map to document positions |
spacetime hashtags | positions camera, triggers scene-preset/time | jumps/scrolls to chapter |
XR Fragments does not look at XR (or the web) through the lens of HTML, but approaches things from a higher-level browser perspective:
```
+----------------------------------------------------------------------------------------------+
| |
| the soul of any URL: ://macro /meso ?micro #nano |
| |
| 2D URL: ://library.com /document ?search #chapter |
| |
| 4D URL: ://park.com /4Dscene.fbx --> ?search --> #view ---> hashbus |
| │ | |
| XRWG <---------------------<------------+ |
| │ | |
| ├─ objects --------------->------------| |
| └─ text --------------->------------+ |
| |
| |
+----------------------------------------------------------------------------------------------+
```
Traditional webbrowsers can become 4D document-ready by:
- loading 3D assets (gltf/fbx e.g.) natively (not thru HTML).
- allowing assets to publish hashtags to themselves (the scene) using the hashbus (like hashtags controlling the scrollbar).
- collapsing the 3D scene to a wordgraph (for essential navigation purposes) controllable through a hash(tag)bus
XR Fragments itself is HTML-agnostic, though pseudo-XR Fragment browsers can be implemented on top of HTML/Javascript.
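To illustrate the hashbus mentioned above, here is a minimal, non-normative sketch (the names `hashbus`, `sub` and `pub` are illustrative, not part of the spec): hashtags arriving from the URL, or published by assets themselves, fan out to subscribers such as the camera, scene-projections and the XRWG.

```js
// Minimal hashbus sketch (illustrative, not normative): hashtags are published on a
// bus; subscribers (camera, scene-projections, XRWG) react to them.
const hashbus = {
  subscribers: [],
  sub(fn){ this.subscribers.push(fn) },
  pub(hash){ this.subscribers.forEach( (fn) => fn(hash) ) }
}

// the browser forwards URL-hash changes to the bus ...
window.addEventListener('hashchange', () => hashbus.pub(document.location.hash) )
// ... and loaded assets/objects can publish hashtags to the scene themselves
hashbus.pub('#pos=0,0,1&t=500,1000')
```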
# Conventions and Definitions
See appendix below in case certain terms are not clear.
# XR Fragment URI Grammar
```
reserved    = gen-delims / sub-delims
gen-delims  = "#" / "&"
sub-delims  = "," / "="
```

Example: `://foo.com/my3d.gltf#pos=1,0,0&prio=-5&t=0,100`

Demo | Explanation |
---|---|
`pos=1,2,3` | vector/coordinate argument e.g. |
`pos=1,2,3&rot=0,90,0&q=.foo` | combinators |
this is already implemented in all browsers
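For illustration, a minimal fragment parser in javascript could look like the sketch below (the name `parseXRFragment` is illustrative, not part of the spec):

```js
// Minimal sketch of splitting an XR Fragment into key/value arguments,
// using the gen-delims "#","&" and sub-delims ",","=" from the grammar above.
function parseXRFragment(url){
  const hash = url.split('#')[1] || ''              // everything after '#'
  const args = {}
  for( const pair of hash.split('&') ){             // '&' separates fragment arguments
    if( !pair ) continue
    const [key, value] = pair.split('=')            // '=' separates key and value
    args[key] = value === undefined ? '' : value.split(',')  // ',' separates vector components
  }
  return args
}

// parseXRFragment('://foo.com/my3d.gltf#pos=1,0,0&prio=-5&t=0,100')
// → { pos:['1','0','0'], prio:['-5'], t:['0','100'] }
```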
# List of URI Fragments
fragment | type | example | info |
---|---|---|---|
`#pos` | vector3 | `#pos=0.5,0,0` | positions camera (or XR floor) to xyz-coord 0.5,0,0 |
`#rot` | vector3 | `#rot=0,90,0` | rotates camera to xyz-rotation 0,90,0 |
`#t` | vector2 | `#t=500,1000` | sets animation-loop range between frame 500 and 1000 |
`#......` | string | `#.cubes` `#cube` | predefined views, XRWG fragments and ID fragments |
xyz coordinates are similar to ones found in SVG Media Fragments
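A hedged sketch of applying such fragments to a camera, assuming a THREE.js renderer (not a normative implementation; `applyCameraFragments` is an illustrative name):

```js
// Sketch assuming a THREE.js camera of applying the #pos and #rot fragments above.
import * as THREE from 'three'

function applyCameraFragments(camera, args){
  if( args.pos ) camera.position.set( ...args.pos.map(Number) )      // #pos=x,y,z (scene units)
  if( args.rot ){                                                    // #rot=x,y,z (degrees)
    const [x, y, z] = args.rot.map( (d) => THREE.MathUtils.degToRad(Number(d)) )
    camera.rotation.set(x, y, z)
  }
  // #t=start,stop (animation-loop range) is left to the animation system of the engine
}
```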
# List of metadata for 3D nodes
key | type | example (JSON) | info |
---|---|---|---|
`name` | string | `"name": "cube"` | available in all 3D fileformats & scenes |
`tag` | string | `"tag": "cubes geo"` | available through custom property in 3D fileformats |
`href` | string | `"href": "b.gltf"` | available through custom property in 3D fileformats |
`src` | string | `"src": "#cube"` | available through custom property in 3D fileformats |
Popular compatible 3D fileformats: `.gltf`, `.obj`, `.fbx`, `.usdz`, `.json` (THREE.js), `.dae` and so on.

NOTE: XR Fragments are file-agnostic, which means that the metadata exists in programmatic 3D scene(nodes) too.
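For example, in glTF such custom properties typically end up in a node's `extras` field (Blender writes custom properties there on export); a hypothetical node carrying XR Fragment metadata might look like:

```js
// Hypothetical glTF node (assumption: the exporter writes custom properties to
// "extras", as Blender does) carrying XR Fragment metadata:
const node = {
  name: "aquariumcube",
  mesh: 0,
  extras: {
    tag : "cubes geo",
    src : "://rescue.com/fish.gltf#bass%20tuna",
    href: "#pos=1,0,1&t=100,200"
  }
}
```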
# Navigating 3D
Here's an ascii representation of a 3D scene-graph which contains 3D objects ◻ and their metadata:
```
+--------------------------------------------------------+
| |
| index.gltf |
| │ |
| ├── ◻ buttonA |
| │ └ href: #pos=1,0,1&t=100,200 |
| │ |
| └── ◻ buttonB |
| └ href: other.fbx | <-- file-agnostic (can be .gltf .obj etc)
| |
+--------------------------------------------------------+
```
An XR Fragment-compatible browser viewing this scene, allows the end-user to interact with `buttonA` and `buttonB`.
In case of `buttonA` the end-user will be teleported to another location and time in the currently loaded scene, but `buttonB` will replace the current scene with a new one, like `other.fbx`.
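A hedged sketch of these two behaviours, reusing the illustrative `parseXRFragment` and `applyCameraFragments` helpers from earlier chapters (a GLTFLoader stands in for whichever loader matches the file format):

```js
// Hedged sketch of the two href behaviours above. Helper names (loadScene, ctx) are
// illustrative; GLTFLoader stands in for whichever loader matches the file format.
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js'

const loadScene = (url) =>
  new Promise( (resolve, reject) =>
    new GLTFLoader().load( url, (gltf) => resolve(gltf.scene), undefined, reject ) )

function onHrefActivate(href, ctx){
  if( href.startsWith('#') ){
    // buttonA-style: teleport within the currently loaded scene (position and/or time)
    applyCameraFragments( ctx.camera, parseXRFragment(href) )
  }else{
    // buttonB-style: replace the current scene with the referenced file
    loadScene( href.split('#')[0] ).then( (scene) => {
      ctx.scene = scene
      applyCameraFragments( ctx.camera, parseXRFragment(href) )
    })
  }
}
```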
# Embedding 3D content
Here's an ascii representation of a 3D scene-graph with 3D objects ◻ which embeds remote & local 3D objects ◻ with/out using queries:
```
+--------------------------------------------------------+   +-------------------------+
| | | |
| index.gltf | | ocean.com/aquarium.fbx |
| │ | | │ |
| ├── ◻ canvas | | └── ◻ fishbowl |
| │ └ src: painting.png | | ├─ ◻ bass |
| │ | | └─ ◻ tuna |
| ├── ◻ aquariumcube | | |
| │ └ src: ://rescue.com/fish.gltf#bass%20tuna | +-------------------------+
| │ |
| ├── ◻ bedroom |
| │ └ src: #canvas |
| │ |
| └── ◻ livingroom |
| └ src: #canvas |
| |
+--------------------------------------------------------+
```
An XR Fragment-compatible browser viewing this scene, lazy-loads and projects `painting.png` onto the (plane) object called `canvas` (which is copy-instanced in the bedroom and livingroom).
Also, after lazy-loading `ocean.com/aquarium.fbx`, only the queried objects `bass` and `tuna` will be instanced inside `aquariumcube`.
Resizing will happen according to its placeholder object `aquariumcube`, see chapter Scaling.
Instead of cherrypicking objects with `#bass&tuna` thru `src`, queries can be used to import the whole scene (and filter out certain objects). See next chapter below.
# XR Fragment queries
Include, exclude, hide/show objects using space-separated strings:

example | outcome |
---|---|
`#q=-sky` | show everything except object named `sky` |
`#q=-.language .english` | hide everything with tag `language`, but show all objects tagged `english` |
`#q=price:>2 price:<5` | of all objects with property `price`, show only objects with value between 2 and 5 |
It's a simple but powerful syntax which allows CSS-like tag/id-selectors with a search-engine prompt-style feeling:

- queries are a way to traverse a scene, and filter objects based on their tag- or property-values.
- words starting with `.` (like `.german`) match tag-metadata of 3D objects, like `"tag":"german"`
- words starting with `.` (like `.german`) match tag-metadata of (BibTeX) tags in XR Text objects, like `@german{KarlHeinz, ...}` e.g.

For example: `#q=.foo` is a shorthand for `#q=tag:foo`, which will select objects with custom property `tag`: `foo`. Just a simple `#q=cube` will simply select an object named `cube`.
including/excluding
operator | info |
---|---|
`-` | removes/hides object(s) |
`:` | indicates an object-embedded custom property key/value |
`.` | alias for "tag": `.foo` equals `tag:foo` |
`>` `<` | compare float or int number |
`/` | reference to the root-scene. Useful in case of (preventing) showing/hiding objects in nested scenes (instanced by `src`) (*) |

(*) `#q=-/cube` hides object `cube` only in the root-scene (not nested `cube` objects), whereas `#q=-cube` hides both object `cube` in the root-scene AND nested `cube` objects.
» example implementation » example 3D asset » discussion
# Query Parser
Here's how to write a query parser:
- create an associative array/object to store query-arguments as objects
- detect object id's & properties like `foo:1` and `foo` (reference regex: `/^.*:[><=!]?/`)
- detect excluders like `-foo`, `-foo:1`, `-.foo`, `-/foo` (reference regex: `/^-/`)
- detect root selectors like `/foo` (reference regex: `/^[-]?\//`)
- detect tag selectors like `.foo` (reference regex: `/^[-]?tag$/`)
- detect number values like `foo:1` (reference regex: `/^[0-9\.]+$/`)
- expand aliases like `.foo` into `tag:foo`
- for every query token split string on `:`
- create an empty array `rules`
- then strip key-operator: convert "-foo" into "foo"
- add operator and value to rule-array
- therefore we set `id` to `true` or `false` (false = excluder `-`)
- and we set `root` to `true` or `false` (true = `/` root selector is present)
- we convert key '/foo' into 'foo'
- finally we add the key/value to the store like `store.foo = {id:false,root:true}` e.g.
An example query-parser (which compiles to many languages) can be found here
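For illustration only (this is not the reference implementation), the steps above roughly condense to:

```js
// Illustrative sketch condensing the query-parser steps above.
function parseQuery(query){
  const store = {}
  for( let token of query.split(/\s+/) ){
    if( !token ) continue
    const rule = { id: true, root: false }
    if( token.match(/^-/) ){ rule.id = false; token = token.substr(1) }    // excluder '-'
    if( token.match(/^\//) ){ rule.root = true; token = token.substr(1) }  // root selector '/'
    if( token.match(/^\./) ) token = 'tag:' + token.substr(1)              // alias '.foo' => 'tag:foo'
    const [key, value] = token.split(':')
    if( value !== undefined ){                                             // property rule like price:>2
      rule.key   = key
      rule.value = value.match(/^[0-9\.]+$/) ? parseFloat(value) : value   // plain numbers become floats
      store[key] = rule
    } else store[token] = rule                                             // plain object-name/id rule
  }
  return store
}

// parseQuery('-/cube price:>2 .english')
// → { cube:  {id:false, root:true},
//     price: {id:true,  root:false, key:'price', value:'>2'},
//     tag:   {id:true,  root:false, key:'tag',   value:'english'} }
```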
# Embedding content (src-instancing)
`src` is the 3D version of the iframe.
It instances content (in objects) in the current scene/asset.

fragment | type | example value |
---|---|---|
`src` | string (uri, predefined view, or query) | |
- local/remote content is instanced by the `src` (query) value (and attached to the placeholder mesh containing the `src` property)
- local `src` values (URL starting with `#`, like `#cube&foo`) mean that only the mentioned objectnames will be copied to the instanced scene (from the current scene), while preserving their names (to support recursive selectors). (example code)
- local `src` values indicating a query (`#q=`) mean that all included objects (from the current scene) will be copied to the instanced scene (before applying the query), while preserving their names (to support recursive selectors). (example code)
- the instanced scene (from a `src` value) should be scaled according to its placeholder object, or scaled relatively based on the scale-property (of a geometry-less placeholder, an 'empty'-object in blender e.g.). For more info see chapter Scaling.
- external `src` (file) values should be served with an appropriate mimetype (so the XR Fragment-compatible browser will know how to render it). The bare minimum supported mimetypes are:
  - `model/gltf+json`
  - `image/png`
  - `image/jpg`
  - `text/plain;charset=utf-8;bib=^@`
- when the placeholder object is a 2D plane, but the mimetype is 3D, then render the spatial content on that plane via a stencil buffer.
- when only one object was cherrypicked (`#cube` e.g.), set its position to `0,0,0`
» example implementation
» example 3D asset
» discussion
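A hedged sketch of src-instancing, assuming THREE.js and its GLTFLoader (the name `instanceSrc` is illustrative; query-values like `#q=` are left out for brevity):

```js
// Hedged sketch of src-instancing (illustrative names, not normative).
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js'

function instanceSrc(placeholder, src, currentScene){
  if( src.startsWith('#') ){
    // local src: copy the named objects from the current scene, preserving their names
    for( const name of src.substr(1).split('&') ){
      const obj = currentScene.getObjectByName(name)
      if( obj ) placeholder.add( obj.clone() )
    }
  }else{
    // external src: lazy-load the remote file and attach it to the placeholder
    new GLTFLoader().load( src.split('#')[0], (gltf) => {
      placeholder.add( gltf.scene )   // scaling rules are covered in chapter Scaling
    })
  }
}
```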
# Referencing content (href portals)
navigation, portals & mutations
fragment | type | example value |
---|---|---|
`href` | string (uri or predefined view) | `#pos=1,1,0` `#pos=1,1,0&rot=90,0,0` `://somefile.gltf#pos=1,1,0` |
- clicking an ''external''- or ''file URI'' fully replaces the current scene and assumes `pos=0,0,0&rot=0,0,0` by default (unless specified)
- relocation/reorientation should happen locally for local URI's (`#pos=....`)
- navigation should not happen ''immediately'' when the user is more than 2 meters away from the portal/object containing the href (to prevent accidental navigation e.g.)
- URL navigation should always be reflected in the client (in case of javascript: see here for an example navigator).
- in XR mode, the navigator back/forward-buttons should always be visible (using a wearable e.g., see here for an example wearable)
- in case of navigating to a new position, ''first'' navigate to the ''current position'' so that the ''back-button'' of the ''browser-history'' always refers to the previous position (see here)
- portal-rendering: a 2:1 ratio texture-material indicates an equirectangular projection
» example implementation
» example 3D asset
» discussion
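A hedged sketch of such a navigator, reusing the illustrative helpers from earlier chapters (`parseXRFragment`, `applyCameraFragments`, `loadScene`); note how the current position is recorded first so the back-button returns to it:

```js
// Hedged sketch of an href navigator (illustrative, not normative).
function navigateTo(href, ctx){
  // first re-record the *current* position, so the browser back-button returns to it
  history.pushState({}, '', `#pos=${ctx.camera.position.toArray().join(',')}`)
  if( href.startsWith('#') ){
    history.pushState({}, '', href)                               // local relocation
    applyCameraFragments( ctx.camera, parseXRFragment(href) )
  }else{
    loadScene( href.split('#')[0] ).then( (scene) => {            // external: replace scene
      ctx.scene = scene
      history.pushState({}, '', href)                             // (URL-resolution details elided)
      applyCameraFragments( ctx.camera, parseXRFragment(href) )   // pos=0,0,0 unless specified
    })
  }
}
```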
# Scaling instanced content
Sometimes embedded properties (like href or src) instance new objects.
But what about their scale?
How does the scale of the object (with the embedded properties) impact the scale of the referenced content?
Rule of thumb: visible placeholder objects act as a '3D canvas' for the referenced scene (a plane acts like a 2D canvas for images e.g., a cube as a 3D canvas e.g.).
- IF an embedded property (`src` e.g.) is set on a non-empty placeholder object (geometry of >2 vertices):
  - calculate the bounding box of the ''placeholder'' object (maxsize=1.4 e.g.)
  - hide the ''placeholder'' object (material e.g.)
  - instance the `src` scene as a child of the existing object
  - calculate the bounding box of the instanced scene, and scale it accordingly (to 1.4 e.g.)

  REASON: a non-empty placeholder object can act as a protective bounding-box (for remote content which might grow over time e.g.)

- ELSE: multiply the scale-vector of the instanced scene with the scale-vector of the placeholder object.
TODO: needs intermediate visuals to make things more obvious
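A hedged sketch of the non-empty placeholder case, assuming THREE.js (names are illustrative):

```js
// Hedged sketch: the placeholder acts as a protective bounding-box for the instanced scene.
import * as THREE from 'three'

function fitToPlaceholder(placeholder, instanced){
  const target = new THREE.Box3().setFromObject(placeholder)
                   .getSize(new THREE.Vector3())                  // bounding box of placeholder (maxsize=1.4 e.g.)
  if( placeholder.material ) placeholder.material.visible = false // hide the placeholder (material)
  placeholder.add(instanced)                                      // instance the src-scene as child
  const size = new THREE.Box3().setFromObject(instanced)
                 .getSize(new THREE.Vector3())                    // bounding box of instanced scene
  const scale = Math.min( target.x / size.x, target.y / size.y, target.z / size.z )
  instanced.scale.multiplyScalar(scale)                           // scale it accordingly (to 1.4 e.g.)
}
```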
# Text in XR (tagging, linking to spatial objects)
How does XR Fragments interlink text with objects?
XR Fragments does this by collapsing space into a Word Graph (the XRWG), augmented by Bib(s)TeX.
Instead of just throwing together all kinds of media types into one experience (games), what about the intrinsic connections between them?
Why is HTML adopted less in games outside the browser?
Through the lens of game-making, ideally metadata must come with that text, but neither obfuscate the text, nor spawn another request to fetch it.
XR Fragments does this by detecting Bib(s)TeX, without introducing a new language or fileformat.
Why Bib(s)TeX? Because it seems to be the lowest common denominator for a human-curated XRWG (extendable by speech/scanner/writing/typing e.g., see further motivation here).
Hence:
- XR Fragments promotes (de)serializing a scene to the XRWG
- XR Fragments primes the XRWG, by collecting words from the `tag`- and name-properties of 3D objects.
- XR Fragments primes the XRWG, by collecting words from optional metadata at the end of content of text (see default mimetype & Data URI)
- Bib's and BibTex are first tag citizens for priming the XRWG with words (from XR text)
- Like Bibs, XR Fragments generalizes the BibTex author/title-semantics (`author{title}`) into this points to that (`this{that}`)
- The XRWG should be recalculated when textvalues (in `src`) change
- HTML/RDF/JSON is still great, but is beyond the XRWG-scope (they fit better in the application-layer)
- Applications don't have to be able to access the XRWG programmatically, as they can easily generate one themselves by traversing the scene-nodes.
- The XR Fragment focuses on fast and easy-to-generate end-user controllable word graphs (instead of complex implementations that try to defeat word ambiguity)
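A hedged sketch of priming the XRWG from a scene-graph, assuming THREE.js (where glTF custom properties end up in `userData`); names are illustrative:

```js
// Hedged sketch of priming the XRWG with words from name- and tag-properties of 3D objects.
function primeXRWG(scene){
  const xrwg = {}                                   // word → [{type, node}, ...]
  const connect = (word, ref) => (xrwg[word] = xrwg[word] || []).push(ref)
  scene.traverse( (node) => {
    if( node.name ) connect( node.name, { type: 'object', node } )           // name-property
    const tags = String(node.userData.tag || '').split(/\s+/).filter(Boolean)
    for( const t of tags ) connect( t, { type: 'tag', node } )               // tag-property
  })
  return xrwg
}
```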
Example:
```
http://y.io/z.fbx | Derived XRWG (shown as BibTex)
----------------------------------------------------------------------------+--------------------------------------
| @house{castle,
+-[src: data:.....]----------------------+ +-[3D mesh]-+ | url = {https://y.io/z.fbx#castle}
| Chapter one | | / \ | | }
| | | / \ | | @baroque{castle,
| John built houses in baroque style. | | / \ | | url = {https://y.io/z.fbx#castle}
| | | |_____| | | }
| #john@baroque | +-----│-----+ | @baroque{john}
| | │ |
| | ├─ name: castle |
| | └─ tag: house baroque |
+----------------------------------------+ |
[3D mesh ] |
| O ├─ name: john |
| /|\ | |
| / \ | |
+--------+ |
```
the `#john@baroque`-bib associates both text `John` and objectname `john`, with tag `baroque`
Another example:
```
http://y.io/z.fbx | Derived XRWG (printed as BibTex)
----------------------------------------------------------------------------+--------------------------------------
|
+-[src: data:.....]----------------------+ +-[3D mesh]-+ | @house{castle,
| Chapter one | | / \ | | url = {https://y.io/z.fbx#castle}
| | | / \ | | }
| John built houses in baroque style. | | / \ | | @baroque{castle,
| | | |_____| | | url = {https://y.io/z.fbx#castle}
| #john@baroque | +-----│-----+ | }
| @baroque{john} | │ | @baroque{john}
| | ├─ name: castle |
| | └─ tag: house baroque |
+----------------------------------------+ | @house{baroque}
[3D mesh ] | @todo{baroque}
+-[remotestorage.io / localstorage]------+ | O + name: john |
| #baroque@todo@house | | /|\ | |
| ... | | / \ | |
+----------------------------------------+ +--------+ |
```
both `#john@baroque`-bib and BibTex `@baroque{john}` result in the same XRWG, however on top of that 2 tags (`house` and `todo`) are now associated with text/objectname/tag 'baroque'.
As seen above, the XRWG can expand bibs (and the whole scene) to BibTeX.
This allows hasslefree authoring and copy-paste of associations for and by humans, but also makes these URLs possible:
URL example | Result |
---|---|
`https://my.com/foo.gltf#.baroque` | highlights mesh `john`, 3D mesh `castle`, text `John built(..)` |
`https://my.com/foo.gltf#john` | highlights mesh `john`, and the text `John built (..)` |
`https://my.com/foo.gltf#house` | highlights mesh `castle`, and other objects with tag `house` or `todo` |
hashtagbibs potentially allow the enduser to annotate text/objects by speaking/typing/scanning associations, which the XR Browser saves to remotestorage (or localStorage per toplevel URL), as well as referencing BibTags per URI later on: `https://y.io/z.fbx#@baroque@todo` e.g.
The XRWG allows XR Browsers to show/hide relationships in realtime at various levels:
- wordmatch inside `src` text
- wordmatch inside `href` text
- wordmatch object-names
- wordmatch object-tagnames
Spatial wires can be rendered, words/objects can be highlighted/scaled etc.
Some pointers for good UX (but not necessary to be XR Fragment compatible):
- The XR Browser needs to adjust tag-scope based on the endusers needs/focus (infinite tagging only makes sense when environment is scaled down significantly)
- The XR Browser should always allow the human to view/edit the metadata, by clicking 'toggle metadata' on the 'back' (contextmenu e.g.) of any XR text, anywhere anytime.
- respect multi-line BibTeX metadata in text because of the core principle
- Default font (unless specified otherwise) is a modern monospace font, for maximized tabular expressiveness (see the core principle).
- anti-pattern: hardcoupling an XR Browser with a mandatory markup/scripting-language which departs from unobtrusive plain text (HTML/VRML/Javascript) (see the core principle)
- anti-pattern: limiting human introspection, by abandoning plain text as first tag citizen.
The simplicity of appending metadata (and leveling the metadata-playfield between humans and machines) is also demonstrated by visual-meta in greater detail.
Fictional chat:
```
<John> Hey what about this: https://my.com/station.gltf#pos=0,0,1&rot=90,2,0&t=500,1000
<Sarah> I'm checking it right now
<Sarah> I don't see everything..where's our text from yesterday?
<John> Ah wait, that's tagged with tag 'draft' (and hidden)..hold on, try this:
<John> https://my.com/station.gltf#.draft&pos=0,0,1&rot=90,2,0&t=500,1000
<Sarah> how about we link the draft to the upcoming YELLO-event?
<John> ok I'm adding #draft@YELLO
<Sarah> Yesterday I also came up with other useful associations between other texts in the scene:
#event#YELLO
#2025@YELLO
<John> thanks, added.
<Sarah> Btw. I stumbled upon this spatial book which references station.gltf in some chapters:
<Sarah> https://thecommunity.org/forum/foo/mytrainstory.txt
<John> interesting, I'm importing mytrainstory.txt into station.gltf
<John> ah yes, chapter three points to trainterminal_2A in the scene, cool
```
# Default Data URI mimetype
The `src`-values work as expected (respecting mime-types), however:
The XR Fragment specification bumps the traditional default browser-mimetype

`text/plain;charset=US-ASCII`

to a hashtagbib(tex)-friendly one:

`text/plain;charset=utf-8;bib=^@`
This indicates that:
- utf-8 is supported by default
- lines beginning with `@` will not be rendered verbatim by default (read more)
- the XRWG should expand bibs to BibTex occurring in text (`#contactjohn@todo@important` e.g.)
By doing so, the XR Browser (application-layer) can interpret microformats (visual-meta e.g.) to connect text further with its environment (setting up links between textual/spatial objects automatically e.g.).
For more info on this mimetype, see bibs.
Advantages:
- auto-expanding of hashtagbibs associations
- out-of-the-box (de)multiplex human text and metadata in one go (see the core principle)
- no network-overhead for metadata (see the core principle)
- ensuring high FPS: HTML/RDF historically is too 'requesty'/'parsy' for game studios
- rich send/receive/copy-paste everywhere by default, metadata being retained (see the core principle)
- net result: fewer webservices, therefore fewer servers, and overall better FPS in XR
This significantly expands expressiveness and portability of human-tagged text, by postponing machine-concerns to the end of the human text, in contrast to literal interweaving of content and markup-symbols (or extra network requests, webservices e.g.).
For all other purposes, regular mimetypes can be used (but are not required by the spec).
# URL and Data URI
```
+--------------------------------------------------------------+ +------------------------+
| | | author.com/article.txt |
| index.gltf | +------------------------+
| │ | | |
| ├── ◻ article_canvas | | Hello friends. |
| │ └ src: ://author.com/article.txt | | |
| │ | | @book{greatgatsby |
| └── ◻ note_canvas | | ... |
| └ src:`data:welcome human\n@book{sunday...}` | | } |
| | +------------------------+
| |
+--------------------------------------------------------------+
```
The enduser will only see `welcome human` and `Hello friends` rendered verbatim (see mimetype).
The beauty is that text in Data URI automatically promotes rich copy-paste (retaining metadata).
In both cases, the text gets rendered immediately (onto a plane geometry, hence the name '_canvas').
The XR Fragment-compatible browser can let the enduser access visual-meta(data)-fields after interacting with the object (contextmenu e.g.).
additional tagging using bibs: to tag spatial object `note_canvas` with 'todo', the enduser can type or speak `#note_canvas@todo`
# XR Text example parser
To prime the XRWG with text from plain text `src`-values, here's an example XR Text (de)multiplexer in javascript (which supports inline bibs & bibtex):
```js
xrtext = {
expandBibs: (text) => {
let bibs = { regex: /(#[a-zA-Z0-9_+@\-]+(#)?)/g, tags: {}}
text.replace( bibs.regex , (m,k,v) => {
tok = m.substr(1).split("@")
match = tok.shift()
if( tok.length ) tok.map( (t) => bibs.tags[t] = `@${t}{${match},\n}` )
else if( match.substr(-1) == '#' )
bibs.tags[match] = `@{${match.replace(/#/,'')}}`
else bibs.tags[match] = `@${match}{${match},\n}`
})
return text.replace( bibs.regex, '') + Object.values(bibs.tags).join('\n')
},
decode: (str) => {
// bibtex: ↓@ ↓<tag|tag{phrase,|{ruler}> ↓property ↓end
let pat = [ /@/, /^\S+[,{}]/, /},/, /}/ ]
let tags = [], text='', i=0, prop=''
let lines = xrtext.expandBibs(str).replace(/\r?\n/g,'\n').split(/\n/)
for( let i = 0; i < lines.length && !String(lines[i]).match( /^@/ ); i++ )
text += lines[i]+'\n'
bibtex = lines.join('\n').substr( text.length )
bibtex.split( pat[0] ).map( (t) => {
try{
let v = {}
if( !(t = t.trim()) ) return
if( tag = t.match( pat[1] ) ) tag = tag[0]
if( tag.match( /^{.*}$/ ) ) return tags.push({ruler:tag})
if( tag.match( /}$/ ) ) return tags.push({k: tag.replace(/}$/,''), v: {}})
t = t.substr( tag.length )
t.split( pat[2] )
.map( kv => {
if( !(kv = kv.trim()) || kv == "}" ) return
v[ kv.match(/\s?(\S+)\s?=/)[1] ] = kv.substr( kv.indexOf("{")+1 )
})
tags.push( { k:tag, v } )
}catch(e){ console.error(e) }
})
return {text, tags}
},
encode: (text,tags) => {
let str = text+"\n"
for( let i in tags ){
let item = tags[i]
if( item.ruler ){
str += `@${item.ruler}\n`
continue;
}
str += `@${item.k}\n`
for( let j in item.v ) str += ` ${j} = {${item.v[j]}}\n`
str += `}\n`
}
return str
}
}
```
The above functions (de)multiplex text/metadata, expand bibs, and (de)serialize BibTeX and vice versa.
The above can be used as a starting point for LLMs to translate/steelman into a more formal form/language.
```js
str = `
hello world
here are some hashtagbibs followed by bibtex:
#world
#hello@greeting
#another-section#
@{some-section}
@flap{
asdf = {23423}
}`
var {tags,text} = xrtext.decode(str) // demultiplex text & bibtex
tags.find( (t) => t.k == 'flap{' ).v.asdf = 1 // edit tag
tags.push({ k:'bar{', v:{abc:123} }) // add tag
console.log( xrtext.encode(text,tags) ) // multiplex text & bibtex back together
```
This expands to the following (hidden by default) BibTex appendix:
```
hello world
here are some hashtagbibs followed by bibtex:
@{some-section}
@flap{
asdf = {1}
}
@world{world,
}
@greeting{hello,
}
@{another-section}
@bar{
abc = {123}
}
```
When an XR browser updates the human text, a quick scan for nonmatching tags (`@book{nonmatchingbook` e.g.) should be performed, prompting the enduser to delete them.
# Security Considerations
Since XR Text contains metadata too, the user should be able to set up tagging-rules, so the copy-paste feature can:

- filter out sensitive data when copy/pasting (XR text with `tag:secret` e.g.)
# IANA Considerations
This document has no IANA actions.
# Acknowledgments
# Appendix: Definitions
definition | explanation |
---|---|
human | a sentient being who thinks fuzzy, absorbs, and shares thought (by plain text, not markuplanguage) |
scene | a (local/remote) 3D scene or 3D file (index.gltf e.g.) |
3D object | an object inside a scene characterized by vertex-, face- and customproperty data. |
metadata | custom properties of text, 3D Scene or Object(nodes), relevant to machines and a human minority (academics/developers) |
XR fragment | URI Fragment with spatial hints like #pos=0,0,0&t=1,100 e.g. |
the XRWG | wordgraph (collapses 3D scene to tags) |
the hashbus | hashtags map to camera/scene-projections |
spacetime hashtags | positions camera, triggers scene-preset/time |
placeholder object | a 3D object with src-metadata (which will be replaced by the src-data) |
src | (HTML-piggybacked) metadata of a 3D object which instances content |
href | (HTML-piggybacked) metadata of a 3D object which links to content |
query | an URI Fragment-operator which queries object(s) from a scene like #q=cube |
visual-meta | visual-meta data appended to text/books/papers which is indirectly visible/editable in XR. |
requestless metadata | metadata which never spawns new requests (unlike RDF/HTML, which can cause framerate-dropping, hence not used a lot in games) |
FPS | frames per second in spatial experiences (games,VR,AR e.g.), should be as high as possible |
introspective | inward sensemaking ("I feel this belongs to that") |
extrospective | outward sensemaking ("I'm fairly sure John is a person who lives in oklahoma") |
◻ | ascii representation of a 3D object/mesh |
(un)obtrusive | obtrusive: wrapping human text/thought in XML/HTML/JSON obfuscates human text into a salad of machine-symbols and words |
BibTeX | simple tagging/citing/referencing standard for plaintext |
BibTag | a BibTeX tag |
(hashtag)bibs | an easy to speak/type/scan tagging SDL (see here) which expands to BibTex/JSON/XML |