diff --git a/doc/RFC_XR_Fragments.html b/doc/RFC_XR_Fragments.html
index 820c6b7..5626cc4 100644
--- a/doc/RFC_XR_Fragments.html
+++ b/doc/RFC_XR_Fragments.html
@@ -92,12 +92,12 @@ XR Fragments allows us to enrich existing dataformats, by recursive use of exist
How can we add more features to existing text & 3D scenes, without introducing new dataformats?
Historically, there have been many attempts to create the ultimate markuplanguage or 3D fileformat.
-However, thru the lens of authoring, their lowest common denominator is still: plain text.
+Their lowest common denominator is (co)authoring using plain text.
XR Fragments allows us to enrich/connect existing dataformats, by recursive use of existing technologies:
@@ -113,109 +113,18 @@ This also means that the repair-ability of machine-matters should be human frien
-“When a car breaks down, the ones without turbosupercharger are easier to fix”
-Let’s always focus on average humans: the ‘fuzzy symbolical mind’ must be served first, before serving the greater ‘categorized typesafe RDF hive mind’).
+Let’s always focus on average humans: our fuzzy symbolical mind must be served first, before serving a greater categorized typesafe RDF hive mind.
+Humans first, machines (AI) later.
Therefore, XR Fragments does not look at XR (or the web) thru the lens of HTML.
+XR Fragments itself is HTML-agnostic, though pseudo-XR Fragment browsers can be implemented on top of HTML/Javascript.
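+To make the HTML/Javascript route concrete, below is a minimal sketch of such a pseudo-browser hook (illustrative only, not normative: it assumes an existing THREE.js camera object and handles only the #pos hint):
+
+```js
+// Sketch: react to XR Fragment navigation in a plain HTML/Javascript page.
+// Assumes an existing THREE.js `camera`; only #pos=x,y,z is handled here.
+window.addEventListener('hashchange', () => {
+  const params = new URLSearchParams(location.hash.slice(1)); // strip '#'
+  const pos = params.get('pos');                              // e.g. "0,0,0"
+  if (pos) {
+    const [x, y, z] = pos.split(',').map(Number);
+    camera.position.set(x, y, z); // teleport the viewer to the spatial hint
+  }
+});
+```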
-| definition | explanation |
-|---|---|
-| human | a sentient being who thinks fuzzy, absorbs, and shares thought (by plain text, not markuplanguage) |
-| scene | a (local/remote) 3D scene or 3D file (index.gltf e.g.) |
-| 3D object | an object inside a scene characterized by vertex-, face- and customproperty data. |
-| metadata | custom properties of text, 3D Scene or Object(nodes), relevant to machines and a human minority (academics/developers) |
-| XR fragment | URI Fragment with spatial hints like #pos=0,0,0&t=1,100 e.g. |
-| src | (HTML-piggybacked) metadata of a 3D object which instances content |
-| href | (HTML-piggybacked) metadata of a 3D object which links to content |
-| query | an URI Fragment-operator which queries object(s) from a scene like #q=cube |
-| visual-meta | visual-meta data appended to text/books/papers which is indirectly visible/editable in XR. |
-| requestless metadata | opposite of networked metadata (RDF/HTML requests can easily fan out into framerate-dropping, hence not used a lot in games). |
-| FPS | frames per second in spatial experiences (games,VR,AR e.g.), should be as high as possible |
-| introspective | inward sensemaking (“I feel this belongs to that”) |
-| extrospective | outward sensemaking (“I’m fairly sure John is a person who lives in oklahoma”) |
-| ◻ | ascii representation of an 3D object/mesh |
-| (un)obtrusive | obtrusive: wrapping human text/thought in XML/HTML/JSON obfuscates human text into a salad of machine-symbols and words |
-| BibTeX | simple tagging/citing/referencing standard for plaintext |
-| BibTag | a BibTeX tag |
+See appendix below in case certain terms are not clear.
-Popular compatible 3D fileformats: .gltf, .obj, .fbx, .usdz, .json (THREEjs), COLLADA and so on.
+Popular compatible 3D fileformats: .gltf, .obj, .fbx, .usdz, .json (THREE.js), .dae and so on.
NOTE: XR Fragments are file-agnostic, which means that the metadata exists in programmatic 3D scene(nodes) too.
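+Since the metadata lives on scene nodes, it can be read straight from a loaded scene at runtime. A minimal sketch, assuming THREE.js (whose GLTFLoader exposes glTF custom properties/extras as userData) and a hypothetical index.gltf:
+
+```js
+// Sketch: glTF custom properties ("extras") surface as `userData` in THREE.js,
+// so src/href metadata lives on programmatic scene nodes, not only in the file.
+import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
+
+new GLTFLoader().load('index.gltf', (gltf) => {
+  gltf.scene.traverse((node) => {
+    const { href, src } = node.userData;       // XR Fragment metadata, if any
+    if (href) console.log(node.name, 'links to', href);
+    if (src)  console.log(node.name, 'instances', src);
+  });
+});
+```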
@@ -958,7 +867,109 @@ Therefore, the enduser in an XR Fragment-compatible browser can copy/paste/share
Acknowledgments
-TODO acknowledge.
+
+Appendix: Definitions
+
+| definition | explanation |
+|---|---|
+| human | a sentient being who thinks fuzzy, absorbs, and shares thought (by plain text, not markup language) |
+| scene | a (local/remote) 3D scene or 3D file (index.gltf e.g.) |
+| 3D object | an object inside a scene characterized by vertex-, face- and custom-property data |
+| metadata | custom properties of text, 3D Scene or Object(nodes), relevant to machines and a human minority (academics/developers) |
+| XR fragment | URI Fragment with spatial hints like #pos=0,0,0&t=1,100 e.g. |
+| src | (HTML-piggybacked) metadata of a 3D object which instances content |
+| href | (HTML-piggybacked) metadata of a 3D object which links to content |
+| query | a URI Fragment operator which queries object(s) from a scene like #q=cube (see the sketch after this table) |
+| visual-meta | visual-meta data appended to text/books/papers which is indirectly visible/editable in XR |
+| requestless metadata | metadata which never spawns new requests (unlike RDF/HTML, which can cause framerate drops, hence not used a lot in games) |
+| FPS | frames per second in spatial experiences (games, VR, AR e.g.), should be as high as possible |
+| introspective | inward sensemaking (“I feel this belongs to that”) |
+| extrospective | outward sensemaking (“I’m fairly sure John is a person who lives in Oklahoma”) |
+| ◻ | ASCII representation of a 3D object/mesh |
+| (un)obtrusive | obtrusive: wrapping human text/thought in XML/HTML/JSON obfuscates human text into a salad of machine symbols and words |
+| BibTeX | simple tagging/citing/referencing standard for plaintext |
+| BibTag | a BibTeX tag |
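+As a rough illustration of the query operator above, resolving #q=cube could look like the sketch below (a deliberate simplification, assuming a THREE.js scene and matching on object names only; the actual query grammar is richer):
+
+```js
+// Sketch: resolve a #q=<name> fragment to matching objects in a scene.
+function queryScene(scene, hash) {
+  const q = new URLSearchParams(hash.slice(1)).get('q'); // e.g. "cube"
+  const matches = [];
+  scene.traverse((node) => {
+    if (node.name === q) matches.push(node); // name match only, for brevity
+  });
+  return matches;
+}
+
+// Usage: queryScene(scene, '#q=cube') → array of objects named "cube"
+```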