From 94b89f7a9cef6950debb264cc2dfa2a83e2c4148 Mon Sep 17 00:00:00 2001
From: Leon van Kammen
Date: Mon, 4 Sep 2023 21:21:52 +0200
Subject: [PATCH] update documentation

---
 doc/RFC_XR_Fragments.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/RFC_XR_Fragments.md b/doc/RFC_XR_Fragments.md
index 58f8c87..0e00a84 100644
--- a/doc/RFC_XR_Fragments.md
+++ b/doc/RFC_XR_Fragments.md
@@ -335,7 +335,7 @@ This allows rich interaction and interlinking between text and 3D objects:
 1. When the user surfs to https://.../index.gltf#AI the XR Fragments-parser points the enduser to the AI object, and can show contextual info about it.
 2. When (partial) remote content is embedded thru XR Fragment queries (see XR Fragment queries), its related visual-meta can be embedded along.
 
-## BibTeX as lowest common denominator for tagging/triple
+## BibTeX as lowest common denominator for tagging/triples
 
 The everything-is-text focus of BiBTex is a great advantage for introspection, and perhaps a necessary bridge towards RDF (extrospective).
 BibTeX-appendices (visual-meta e.g.) are already adopted in the physical world (academic books), perhaps due to its terseness & simplicity: