NOTE: The chapters in this document are ordered from high-level to low-level (technical) as much as possible
principle | XR 4D URL | HTML 2D URL |
---|---|---|
the XRWG | wordgraph (collapses 3D scene to tags) | Ctrl-F (find) |
the hashbus | hashtags map to camera/scene-projections | hashtags map to document positions |
spacetime hashtags | positions camera, triggers scene-preset/time | jumps/scrolls to chapter |
src metadata | renders content and offers sourceportation | renders content |
href metadata | teleports to other XR document | jumps to other HTML document |
href metadata | repositions camera or animation-range | jumps to camera |
href metadata | draws visible connection(s) for XRWG 'tag' | |
href metadata | triggers predefined view | Media fragments |
XR Fragments does not look at XR (or the web) through the lens of HTML,
but approaches things from a higher-level feedback-loop/hypermedia browser perspective:
Example: ://foo.com/my3d.gltf#pos=1,0,0&prio=-5&t=0,100
Demo | Explanation |
---|---|
pos=1,2,3 | a vector/coordinate argument |
pos=1,2,3&rot=0,90,0&q=.foo | combinators |
this syntax is already implemented in all browsers
fragment | type | example | info |
---|---|---|---|
#pos | vector3 | #pos=0.5,0,0 | positions camera (or XR floor) to xyz-coord 0.5,0,0 |
#rot | vector3 | #rot=0,90,0 | rotates camera to xyz-rotation 0,90,0 |
#t | vector2 | #t=500,1000 | sets animation-loop range between frame 500 and 1000 |
#...... | string | #.cubes #cube | predefined views, XRWG fragments and ID fragments |
xyz coordinates are similar to ones found in SVG Media Fragments
key | type | example (JSON) | function | existing compatibility |
---|---|---|---|---|
name | string | "name": "cube" | identify/tag | object supported in all 3D fileformats & scenes |
tag | string | "tag": "cubes geo" | tag object | custom property in 3D fileformats |
href | string | "href": "b.gltf" | XR teleport | custom property in 3D fileformats |
src | string | "src": "#cube" | XR embed / teleport | custom property in 3D fileformats |
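Since glTF stores custom properties under `extras`, an XR browser could read the keys above from a parsed node roughly as follows. The node object and the helper name `xrMetadata` are hypothetical examples, not spec APIs:

```javascript
// Sketch: reading XR Fragment metadata from a parsed glTF node.
// glTF keeps custom properties in the node's `extras` object.
const node = {
  name: 'cube',                                   // supported in all 3D fileformats
  extras: { tag: 'cubes geo', href: 'b.gltf', src: '#cube' }
};

function xrMetadata(node) {
  const { tag, href, src } = node.extras || {};   // custom properties
  return { name: node.name, tag, href, src };
}

xrMetadata(node);
// → { name: 'cube', tag: 'cubes geo', href: 'b.gltf', src: '#cube' }
```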
NOTE: XR Fragments are file-agnostic, which means that the metadata exists in programmatic 3D scene(nodes) too.
It also allows sourceportation, which basically means the enduser can teleport to the original XR Document of an src-embedded object, and see a visible connection to that particular embedded object.
fragment | type | functionality |
---|---|---|
<b>#pos</b>=0,0,0 | vector3 | (re)position camera |
<b>#t</b>=0,100 | vector2 | (re)position looprange of scene-animation or src-mediacontent |
<b>#rot</b>=0,90,0 | vector3 | rotate camera |
Example URL: ://foo/world.gltf#cube&pos=0,0,0
fragment | type | example value |
---|---|---|
src | string (uri, hashtag/query) | #cube #sometag #q=-ball_inside_cube<br>#q=-/sky -rain<br>#q=-.language .english<br>#q=price:>2 price:<5<br>https://linux.org/penguin.png https://linux.world/distrowatch.gltf#t=1,100 linuxapp://conference/nixworkshop/apply.gltf#q=flyer androidapp://page1?tutorial#pos=0,0,1&t=1,100 |
Instead of cherry-picking objects with #bass&tuna through src, queries can be used to import the whole scene (and filter out certain objects). See the next chapter below.
fragment | type | example value |
---|---|---|
href | string (uri or predefined view) | #pos=1,1,0 #pos=1,1,0&rot=90,0,0 ://somefile.gltf#pos=1,1,0 |
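The two href forms in the table differ in whether a new document request is needed. A minimal sketch of that distinction (the function name `followHref` and the return shape are assumptions for illustration):

```javascript
// Sketch: an href starting with '#' is in-scene navigation (a predefined
// view or camera reposition); anything else is a teleport to another
// XR document, whose own fragment (if any) is applied after loading.
function followHref(href, currentUrl) {
  if (href.startsWith('#')) {
    return { url: currentUrl, fragment: href.slice(1) };  // no new request
  }
  const [url, fragment = ''] = href.split('#');           // teleport
  return { url, fragment };
}

followHref('#pos=1,1,0', 'world.gltf');
// → { url: 'world.gltf', fragment: 'pos=1,1,0' }
```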
Rule of thumb: visible placeholder objects act as a '3D canvas' for the referenced scene (a plane acts like a 2D canvas for images, a cube as a 3D canvas, e.g.).
REASON: a non-empty placeholder object can act as a protective bounding-box (e.g. for remote content which might grow over time)
TODO: needs intermediate visuals to make things more obvious
example | outcome |
---|---|
#q=-sky | show everything except object named sky |
#q=-tag:language tag:english | hide everything with tag language, but show all tag english objects |
#q=price:>2 price:<5 | of all objects with property price, show only objects with value between 2 and 5 |
operator | info |
---|---|
- | removes/hides object(s) |
: | indicates an object-embedded custom property key/value |
> < | compare float or int number |
/ | reference to root-scene. Useful in case of (preventing) showing/hiding objects in nested scenes (instanced by src) (*) |
* = #q=-/cube hides object cube only in the root-scene (not nested cube objects)
#q=-cube hides object cube in the root-scene <b>AND</b> nested cube objects
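The operators above can be evaluated per object. The sketch below covers name, tag, and numeric-property terms from the examples; the object shape (`{name, tags, props}`) and function name are assumptions, the root-scene `/` scoping is not modeled, and the spec's reference query-parser is more complete:

```javascript
// Minimal #q= evaluator: returns whether an object stays visible.
// '-' hides matches; positive tag/name terms re-include; positive
// comparison terms filter objects that carry the property.
function queryShows(q, obj) {
  let show = true;
  for (const term of q.split(/\s+/).filter(Boolean)) {
    const hide = term.startsWith('-');
    const body = hide ? term.slice(1) : term;
    if (body.includes(':')) {
      const [key, val] = body.split(':');
      if (key === 'tag') {
        if (obj.tags.includes(val)) show = !hide;          // tag match
      } else if (key in obj.props) {
        const cmp = val[0] === '>' ? obj.props[key] > +val.slice(1)
                  : val[0] === '<' ? obj.props[key] < +val.slice(1)
                  : obj.props[key] == val;
        show = show && (hide ? !cmp : cmp);                // property filter
      }
    } else if (obj.name === body) {
      show = !hide;                                        // name match
    }
  }
  return show;
}

queryShows('-sky', { name: 'sky', tags: [], props: {} });  // → false
```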
An example query-parser (which compiles to many languages) can be found here
XR Fragments does this by collapsing space into a Word Graph (the XRWG, see example), augmented by Bib(s)Tex.
Why Bib(s)Tex? Because it seems to be the lowest common denominator for a human-curated XRWG (extendable by speech/scanner/writing/typing e.g.; see further motivation here)
the #john@baroque bib associates both the text John and the objectname john with the tag baroque
both the #john@baroque bib and the BibTex @baroque{john} result in the same XRWG; on top of that, 2 tags (house and todo) are now associated with the text/objectname/tag 'baroque'
URL example | Result |
---|---|
https://my.com/foo.gltf#baroque | draws lines between mesh john, 3D mesh castle, text John built(..) |
https://my.com/foo.gltf#john | draws lines between mesh john, and the text John built (..) |
https://my.com/foo.gltf#house | draws lines between mesh castle, and other objects with tag house or todo |
hashtagbibs potentially allow the enduser to annotate text/objects by speaking/typing/scanning associations, which the XR Browser saves to remote storage (or localStorage, per toplevel URL), as well as referencing BibTags per URI later on: https://y.io/z.fbx#@baroque@todo e.g.
The simplicity of appending metadata (and leveling the metadata-playfield between humans and machines) is also demonstrated by visual-meta in greater detail.
for more info on this mimetype see bibs
This significantly expands the expressiveness and portability of human-tagged text, by postponing machine concerns to the end of the human text, in contrast to literally interweaving content and markup symbols (or extra network requests, webservices e.g.).
additional tagging using bibs : to tag spatial object note_canvas with 'todo', the enduser can type or speak #note_canvas@todo
the above can be used as a starting point for LLMs to translate/steelman into a more formal form/language.
when an XR browser updates the human text, it should quickly scan for non-matching tags (@book{nonmatchingbook e.g.) and prompt the enduser to delete them.
definition | explanation |
---|---|
human | a sentient being who thinks fuzzy, absorbs, and shares thought (by plain text, not markuplanguage) |
scene | a (local/remote) 3D scene or 3D file (index.gltf e.g.) |
3D object | an object inside a scene characterized by vertex-, face- and customproperty data. |
metadata | custom properties of text, 3D Scene or Object(nodes), relevant to machines and a human minority (academics/developers) |
XR fragment | URI Fragment with spatial hints like #pos=0,0,0&t=1,100 e.g. |
the XRWG | wordgraph (collapses 3D scene to tags) |
the hashbus | hashtags map to camera/scene-projections |
spacetime hashtags | positions camera, triggers scene-preset/time |
teleportation | repositioning the enduser to a different position (or 3D scene/file) |
sourceportation | teleporting the enduser to the original XR Document of an src embedded object. |
placeholder object | a 3D object with src-metadata (which will be replaced by the src-data) |
src | (HTML-piggybacked) metadata of a 3D object which instances content |
href | (HTML-piggybacked) metadata of a 3D object which links to content |
query | a URI Fragment-operator which queries object(s) from a scene, like #q=cube |
visual-meta | a way of appending plaintext (BibTeX-style) metadata to the end of documents |
requestless metadata | metadata which never spawns new requests (unlike RDF/HTML, which can cause framerate-dropping, hence not used a lot in games) |
FPS | frames per second in spatial experiences (games,VR,AR e.g.), should be as high as possible |
introspective | inward sensemaking ("I feel this belongs to that") |
extrospective | outward sensemaking ("I'm fairly sure John is a person who lives in oklahoma") |
◻ | ascii representation of a 3D object/mesh |
(un)obtrusive | obtrusive: wrapping human text/thought in XML/HTML/JSON obfuscates human text into a salad of machine-symbols and words |
BibTeX | simple tagging/citing/referencing standard for plaintext |
BibTag | a BibTeX tag |
(hashtag)bibs | an easy to speak/type/scan tagging SDL ( |