As I see it, one can use the constructor app to create scans and export them in two different file formats: .srb and .obj.
The former is mostly unreadable, though:
tango
version 1.0
element vertex 73380
element normal 73380
element color 73380
element face 127292
end_header
[binary vertex/normal/color/face data follows]
At the same time, the .obj file is readable, but to my knowledge it does not include a scale (which it could only do in the form of a comment). An example looks like this:
#tango v 73380 f 127292
v -1.0649999 -1.1735088 -0.405 0.0 0.0 0.0 1.0
v -1.0649999 -1.185 -0.4012563 0.0 0.0 0.0 1.0
v -1.0516545 -1.185 -0.405 0.0 0.0 0.0 1.0
v -0.46714962 -1.455 -0.855 0.29803923 0.30980393 0.31764707 1.0
v -0.465 -1.4619029 -0.855 0.29803923 0.30980393 0.31764707 1.0
v -0.465 -1.455 -0.856611 0.29803923 0.30980393 0.31764707 1.0
v -0.465 -1.425 -0.86408335 0.24313726 0.25882354 0.27450982 1.0
v -0.48130736 -1.425 -0.855 0.3137255 0.32156864 0.32941177 1.0
v -0.46771282 -1.515 -0.82500005 0.3019608 0.31764707 0.33333334 1.0
v -0.465 -1.5301322 -0.82500005 0.3137255 0.32156864 0.3372549 1.0
v -0.465 -1.515 -0.8366339 0.3019608 0.31764707 0.33333334 1.0
v -0.465 -1.485 -0.8420684 0.29803923 0.30980393 0.32156864 1.0
v -0.47021228 -1.485 -0.82500005 0.3019608 0.3137255 0.32156864 1.0
[...]
My goal is to get at the data contained in these files, including the scale, so that I can, for example, determine the size of my scan. Does anyone know how to include the scale in the .obj file or how to read the .srb file?
(I've already had a look at How do I export Point Cloud Data Project Tango, but did not quite see a clear solution there. I also found two apps from Chucknology, but they don't seem to work on my Tango device; at least I can't access the exported ADF from my computer.)
After getting feedback from Tango support, I'm happy to share that the scale is not lost but preserved even when exporting to the .obj file format: the coordinates are simply in meters (e.g. 1.041235 would be roughly 104 cm).
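Since the coordinates are in meters, a quick way to get the size of a scan is to parse the v lines of the exported .obj and compute a bounding box. Here's a minimal Python sketch (the file name is just a placeholder; it assumes the first three numbers after "v" are x, y, z and the remaining fields are per-vertex color):

# Bounding-box size of a Tango .obj scan, assuming vertex lines look like
# "v x y z r g b a" with coordinates in meters.
def obj_bounding_box_size(path):
    mins = [float("inf")] * 3
    maxs = [float("-inf")] * 3
    with open(path) as f:
        for line in f:
            if line.startswith("v "):
                coords = [float(v) for v in line.split()[1:4]]
                for i, value in enumerate(coords):
                    mins[i] = min(mins[i], value)
                    maxs[i] = max(maxs[i], value)
    return [hi - lo for lo, hi in zip(mins, maxs)]

# Example (hypothetical file name):
# print(obj_bounding_box_size("scan.obj"))  # e.g. [2.13, 1.87, 3.42] in meters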
I have two 1920x1080 PNG files, center.png and right.png, which are identical except that the image in right.png is shifted horizontally by 325 pixels.
With MLT XML, I made a two-second video using the lossless FFV1 format, showing one second of center.png and then one second of right.png. Here's my file, foo.mlt:
<?xml version='1.0' encoding='utf-8'?>
<mlt>
<profile width="1920" height="1080"
display_aspect_num="1920" display_aspect_den="1080"
sample_aspect_num="1" sample_aspect_den="1"
colorspace="709" progressive="1"
frame_rate_num="30" frame_rate_den="1"/>
<consumer mlt_service="avformat" properties="lossless/FFV1" target="out.mkv"/>
<producer id="center" mlt_service="qimage" resource="center.png" length="30"/>
<producer id="right" mlt_service="qimage" resource="right.png" length="30"/>
<playlist>
<entry producer="center"/>
<entry producer="right"/>
</playlist>
</mlt>
Then I run melt foo.mlt at the terminal and check the output file out.mkv in my video viewer. However, on close inspection, when right.png appears in the video it is slightly distorted, with some sort of halo-type artifact visible in a magnified view.
Weirdly, only the right.png image is distorted; the center.png displays correctly, even though the two images are identical except for positioning.
Is this a bug? I wouldn't expect any image distortion with a lossless codec, but maybe I've done something wrong.
My specs:
Ubuntu 20.04.2 LTS, 64-bit
melt 6.25.0
ffmpeg version 4.2.4-1ubuntu0.1
Thanks
"Lossless" is not always lossless. In the case of "lossless/FFV1", the chroma format is 4:2:2 which will cut the chroma resolution in half:
https://github.com/mltframework/mlt/blob/master/presets/consumer/avformat/lossless/FFV1
This could be relevant because if format conversion is required, it could cause chroma bleeding on pixels that do not land directly on a chroma sample.
As an experiment, you could try shifting the image by 326 pixels instead of 325 to see if the bleeding still occurs.
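To see why an odd versus even shift could matter, here is a small NumPy sketch that only simulates the idea (it is not what melt/FFV1 literally does internally): it halves the horizontal chroma resolution by averaging pixel pairs, as 4:2:2 does, and then upsamples again. An edge that starts on an even column survives, while the same edge on an odd column picks up an in-between "halo" value:

import numpy as np

def roundtrip_422(row):
    # Subsample: average each horizontal pair of samples (4:2:2-style),
    # then upsample by repeating each averaged sample.
    sub = row.reshape(-1, 2).mean(axis=1)
    return np.repeat(sub, 2)

width = 16
edge_even = np.zeros(width); edge_even[8:] = 255.0  # edge at an even column
edge_odd = np.zeros(width); edge_odd[9:] = 255.0    # edge at an odd column

print(roundtrip_422(edge_even))  # edge stays sharp
print(roundtrip_422(edge_odd))   # 127.5 appears at the edge: chroma bleeding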
There was another question a while back that related to bleeding due to chroma subsampling:
Melt composite transition is slightly blending
I'm trying to set a variable to an array in EscapeVelocity, as shown in the documentation:
#set ($my = "blah")
#set ($say = ["not", $my, "fault"])
However, I get the following error:
error: An error occurred in the #AutoProtoModel processor while processing com.google.protobuf.contrib.autoprotomodel.prototype.AlbumModel:
com.google.escapevelocity.ParseException: Expected an expression, on line 46, at text starting: ["not", $my, "fault"...
com.google.escapevelocity.Parser.parseException(Parser.java:1093)
com.google.escapevelocity.Parser.parsePrimary(Parser.java:923)
com.google.escapevelocity.Parser.parseUnaryExpression(Parser.java:890)
com.google.escapevelocity.Parser.parseExpression(Parser.java:797)
com.google.escapevelocity.Parser.parseSet(Parser.java:401)
com.google.escapevelocity.Parser.parseDirective(Parser.java:328)
com.google.escapevelocity.Parser.parseNode(Parser.java:218)
com.google.escapevelocity.Parser.parseTokens(Parser.java:126)
com.google.escapevelocity.Parser.parse(Parser.java:118)
com.google.escapevelocity.Template.parseFrom(Template.java:112)
com.google.escapevelocity.Template.parseFrom(Template.java:94)
com.google.protobuf.contrib.autoprotomodel.prototype.BackingClassGenerator.loadTemplate(BackingClassGenerator.java:97)
...
Why doesn't this work?
Is this a bug in the Escape Velocity project?
It seems that EscapeVelocity doesn't support setting a variable to a Java array (from the docs):
Unlike Velocity, EscapeVelocity does not allow $indexme to be a Java array.
Why not use Velocity itself? EscapeVelocity is based on the old 1.7 version instead of the newer 2.0, and there are extra tools available for Velocity.
EscapeVelocity is a templating engine that can be used from Java. It is a reimplementation of a subset of functionality from Apache Velocity.
This is not an official Google product.
Is there a way to list all files from my sandbox with their Member Rev. using si.exe?
Besides this, are any special rights required in order to do this?
Thanks!
You could do this using:
si viewsandbox
Enter sandbox name: d:\DELETEME\SB1\project.pj
Do you want to recurse into the subproject d:\DELETEME\SB1\watch\project.pj? [ynYN](n): y
d:\DELETEME\SB1\watch\project.pj (d1) variant-subsandbox
d:\DELETEME\SB1\watch\AISubsystem.txt archived 1.3
d:\DELETEME\SB1\watch\ConversionTool.asm archived 1.1
d:\DELETEME\SB1\watch\ExceptionHandler.asm archived 1.1
d:\DELETEME\SB1\watch\StructureImplementation.java archived 1.6
d:\DELETEME\SB1\watch\dataStructure.txt archived 1.2
d:\DELETEME\SB1\watch\findSmallestInput.asm archived 1.1
I am getting GeoJSON from Postgres.
When I transform to EPSG:3857 (from EPSG:4326) in Postgres, I get this for an extent:
[-1150534050240.958, NaN, -1150492727057.5322, NaN]
When I remove the transformation from Postgres and let OL3 do the transform, I get this:
[-10335423.222313922, 3843466.0274247285, -10335052.00984128, 3843760.3957394864]
In 3.0 and older we always transformed in Postgres before serving to OL3.
Is there an issue with 3.5 consuming already-transformed GeoJSON? Does anyone know of known issues with Postgres/PostGIS and OL3 3.5?
Thank You!
*** snippet
vectorLayer = new ol.layer.Vector({
  source: new ol.source.Vector({
    format: new ol.format.GeoJSON(),
    url: 'myProxy'
  })
});
p.s. from:
https://groups.google.com/forum/#!topic/ol3-dev/R4zdhY3qex8
I want to read from a file that looks like the one below and put the contents into a list that looks like [[Sten Saker][20.0][1.0]], but I don't know how to do it.
Sten Säker 20.0 1.0
Olle Ojamn 19.0 2.0
Charlie Chans 18.0 3.0
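Assuming you're working in Python (the question doesn't say) and that the last two whitespace-separated fields of each line are numbers while everything before them is the name, a minimal sketch could look like this (the file name is just an example):

# Read "name ... score position" lines into a nested list.
def read_results(path):
    rows = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if len(parts) < 3:
                continue  # skip blank or malformed lines
            *name, score, position = parts
            rows.append([" ".join(name), float(score), float(position)])
    return rows

# read_results("results.txt") would give something like:
# [['Sten Säker', 20.0, 1.0], ['Olle Ojamn', 19.0, 2.0], ['Charlie Chans', 18.0, 3.0]]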