Replace integrated ascii reader
Reported by aSydiK | October 4th, 2010 @ 05:48 PM | in 0.5 (closed)
We need to be able to read more file types than just ASCII. We should split off the .ASC reader and only use it when the file type is an ASCII type.
Some file types we can support in addition to .ASC:
.PSI
.DAE (COLLADA)
.OBJ (Alias-Wavefront)
.ARA (Aranz)
.XML
Comments and changes to this ticket
-
Paul October 5th, 2010 @ 03:28 PM
Regarding the .asc file type: I have been using version 0.3 with an .asc file that contains only three fields (X, Y, Z). When I try to upgrade to 0.4 or 0.4.5, this file type no longer works. Looking briefly at the source, the 'loadFile' function seems to have changed so that it no longer works with 'getDataLayout', which in version 0.3 correctly determined my file contents.
-
Andor Salga October 5th, 2010 @ 04:04 PM
- State changed from new to assigned
Paul,
I removed that function because I wanted to polish it before re-adding. It shouldn't be an issue adding it back. Would it be possible for you to attach the file so I can run it as a test case? -
Paul October 5th, 2010 @ 04:43 PM
WinRAR archive attached with a sample file. This is a very small sample of what would normally be much bigger files that we generate; if any of these bigger files would help, I can send them as well. Also, maybe I could get your advice regarding our workflow for this type of sample file, which is dynamically generated from an OpenLayers interface querying a very large PostGIS DB of LiDAR point cloud surveys. At the minute this involves the following steps:
- prepare a spatial query in an OpenLayers interface
- perform the query on the LiDAR DB
- write the results to an .asc file on the web server
- pass the file name back to the client
- the client then starts to load the .asc file into XB PointStream
I would prefer to pass the LiDAR points straight back to the client into XBPointstream and render them and skip the file creation step completely. Thus, if you have any advice in this direction it would be much appreciated.
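For the direct-streaming idea, a hedged sketch of the client side: the server returns the query result as JSON instead of writing an .asc file, and the client flattens it into a vertex array. The payload shape and the name flattenPoints() are assumptions for illustration, not part of the XB PointStream API.

```javascript
// Hypothetical sketch: skip the .asc file creation step and send the
// PostGIS query result to the client as JSON, e.g. [[x,y,z], [x,y,z], ...].
// flattenPoints() turns it into the flat vertex array a renderer expects.
function flattenPoints(json) {
  var points = JSON.parse(json);
  var verts = [];
  for (var i = 0; i < points.length; i++) {
    verts.push(points[i][0], points[i][1], points[i][2]);
  }
  return verts;
}
```

This assumes the renderer grows a way to accept raw arrays, which is exactly the open question in the proposed load() method below in this thread.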
Thanks in advance
Paul -
Andor Salga October 9th, 2010 @ 02:14 AM
- Milestone set to 0.5
- Milestone order changed from 15 to 0
Great, thanks Paul!
1) Yes, we'd love to get a large sample of data you work with.
2) Just wondering, why do you only provide verts in this file? Doesn't it just
render black? Something we'd like to add to the library is automagic normal insertion
in cases when only vertices are passed in. Would this be useful to you?
3) What other kind of data can GIS data have? What is typical, what isn't?
4) When you query the database can you select which part of the point
cloud you want, or do you always have to get the entire file? I don't
see why we shouldn't support raw streams of data. When we add this, you
could probably just feed in the result from the query right into the library.
5) If you're writing the .ASC file to the server, you may want to gzip it
on the fly to reduce download times. Someone helped us with this a while ago
which helped quite a bit:
http://asalga.wordpress.com/2010/06/15/xb-pointstream-release-0-1/#...
Let us know if you want help with this. Thanks again for the reply!
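The gzip-on-the-fly suggestion can be sketched as an Apache config fragment. This assumes an Apache 2.x server with mod_deflate enabled and that .asc files aren't already mapped to a text type; adjust for the actual server stack.

```apache
# Serve .asc point cloud files gzipped on the fly.
# Assumption: Apache 2.x with mod_deflate loaded.
AddType text/plain .asc
AddOutputFilterByType DEFLATE text/plain
```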
-
Andor Salga October 10th, 2010 @ 04:09 PM
So I'm going to suggest some things for this ticket. Let me know what you guys think:
I suggest we create an interface and force all parsers to 'implement' it.
Not strictly a requirement, but each parser should have:
- Author's name
- e-mail contact
- date
- version
- limitations
- file types supported
- any other relevant information
Here is the proposed interface:
// Get the version of this particular parser
// Returns a string
// Example: "0.1", "1.5.1"
getVersion()

// Start downloading and parsing the point cloud
// Path can be any valid URI
//
// There may be cases where the data is already in
// arrays. So we may need to add a method to the
// renderer to just accept these arrays. Ideas?
load(String path)

// Get the progress of how much data was downloaded and parsed so far.
//
// Returns a normalized floating point value
// [0..1] (0 = 0%, 1 = 100%) OR
// -1 if unknown
getProgress()

// The renderer may want to know the total count of points in the file
//
// However, the parser may not know beforehand (such as with ASC files)
// In this case it will return -1 if unknown
// Otherwise it will return the number of points in the stream
getTotalPoints()

// Get the status of the file being downloaded
// Return codes could be:
// 0 - File wasn't found
// 1 - Found file, starting
// 2 - Streaming
// 3 - Complete
getStatus()

// Provide the parser with a callback function to call every time a chunk
// of data is downloaded and parsed.
// When the function is called, it will be passed an object containing arrays:
// {
//   [0.011, 50.32, 12.143, ...],
//   [5.240, 10.54, 30.837, ...],
//   [84.83, 1.873, 95.274, ...]
// }
// These arrays represent ambiguous data streams; the renderer needs to
// provide a meta data callback function to learn what these streams represent.
// Is there a better way to handle this?
//
// Also, color data must be normalized [0..1]
setStreamCallback()

// This function is needed since ASC files are
// supported and in some cases it is ambiguous
// what the data stream contains until later in the file.
// For example, it cannot be known whether the second 'column' of data
// contains colors or normals:
// 50.303134 40.2343 30.566 0 0 1 // blue or +z?
// 50.303136 40.2364 30.578 0 0 1 // blue or +z?
//
// This function can be called to ask what the
// streams represent.
// If the parser doesn't know yet, it returns -1
// Otherwise it returns an array
//
// Return strings or constants?
// [XBPS_VERTICES, XBPS_COLORS, XBPS_NORMALS, XBPS_BINORMALS]
// ["vertices", "colors", "normals", "bi-normals"]
//
// I can see us getting into issues with strings, but adding
// constants will force us to update the renderer every time a
// parser wants to provide a new kind of data. Ideas?
//
// Or should this be a regular function instead of a callback?
setMetaDataCallback()
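As a sanity check on the interface methods above, here is a minimal hypothetical parser sketch in JavaScript. The name ASCParser and the in-memory load() are assumptions for illustration; a real load(path) would stream chunks over XHR and report partial progress.

```javascript
// Hypothetical minimal parser implementing the proposed interface.
// For brevity, load() parses an in-memory ASC string ("x y z" per line)
// instead of downloading from a URI.
function ASCParser() {
  var streamCb = null;
  var progress = -1;     // -1 until we know, then [0..1]
  var totalPoints = -1;  // -1 until parsing tells us

  this.getVersion = function () { return "0.1"; };
  this.getProgress = function () { return progress; };
  this.getTotalPoints = function () { return totalPoints; };
  this.setStreamCallback = function (cb) { streamCb = cb; };

  this.load = function (ascText) {
    var verts = [];
    var lines = ascText.split("\n");
    for (var i = 0; i < lines.length; i++) {
      var f = lines[i].trim().split(/\s+/);
      if (f.length >= 3) {
        verts.push(parseFloat(f[0]), parseFloat(f[1]), parseFloat(f[2]));
      }
    }
    totalPoints = verts.length / 3;
    progress = 1;
    // Hand the renderer the parsed chunk; here the stream is unambiguous,
    // so no meta data callback round-trip is sketched.
    if (streamCb) { streamCb({ vertices: verts }); }
  };
}
```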
// Other concerns
// Anything for GIS?
//
// If we don't have any XBPS dependencies, it will ensure
// both the parser and the renderer are loosely coupled.
// This means constants are out. Ideas? Suggestions?
//
// Should the parser retain the data?
// We can probably be safe letting the renderer
// keep the data and discard the data in the parser.
//
// Should the parser return typed arrays, regular arrays, or
// either? Should it just be the renderer's responsibility?
-
Andor Salga October 10th, 2010 @ 04:41 PM
- Tag set to parser
-
Paul October 11th, 2010 @ 10:48 AM
Hi Andor,
Replies to questions below and thanks again for your help.
-->1) Yes, we'd love to get a large sample of data you work with.
Contained within this rar link are two large sample files of the same area similar to the previous example -
https://left.nuim.ie/download/lidar.rar/plewis/Z0wUpShXn4Su7nzgoCRY...
-->2) Just wondering, why do you only provide verts in this file? Doesn't it just render black? Something we'd like to add to the library is automagic normal insertion
in cases when only vertices are passed in. Would this be useful to you?
Only providing verts has just been my initial testing procedure. I have ps.background([0,0,0,0.5]) and
ps.pointSize(0.5) so I get a very good rendering of the point cloud, in general; the only visual problem is where I have a very dense concentration of points where multiple scans cross.
Automatic normal insertion in cases like this would be very interesting, however, we do have a number of other point properties available (example listed below) that we will start testing soon.
Pulse width: 2
Range: 43.223
Amplitude: 1082
Number of targets: 1
Target number: 1
Reflectance: -1230
So it would be useful to include these values but also have normals automatically generated based on parameter controls such as elevation etc.
-->3) What other kind of data can GIS data have? What is typical, what isn't?
As in answer 2, we have a limited number of LiDAR return parameters. This LiDAR is collected from a mobile 300 kHz scanner, but in future development/testing/research we will be trying to incorporate progressive scan (RGB), multispectral and thermal imagery as possible draping options and/or assigning points appropriate values from these for visualisation. In the meantime we are hoping to use the few parameters we currently have to test feature identification/segmentation approaches in a visualisation interface, so being able to re-render on the fly based on a number of different point parameters would be very handy.
-->4) When you query the database can you select which part of the point cloud you want, or do you always have to get the entire file? I don't
see why we shouldn't support raw streams of data. When we add this, you
could probably just feed in the result from the query right into the library.
Our interface allows us to query the database spatially/geographically for any part of any point cloud that is available in the DB. Our current test database has 6 separate mobile scans with over 400 million points, from which we can generate any point data set in any of the X, Y and/or Z planes. This is only a test, as a much larger storage and access framework will soon be developed on top of this testing work; we will probably have to set query control parameters, as massive LiDAR query downloads would be prohibitive, but this has yet to be ironed out.
On this issue I have a couple of problems with displaying the dynamic LiDAR data sets. I'm using XBP on a Windows 7 64-bit OS in a Chromium 6.0.476.0 (53456) browser.
Firstly, the canvas element has to have a style width and height set in the HTML, which I can dynamically change with JavaScript (jQuery); however, if I leave out the style setting in the HTML, the canvas element only renders as a very small (approx. 100 x 50 pixel) rectangle and no LiDAR loads into it.
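The tiny-rectangle behaviour is consistent with how WebGL canvases are sized: the width/height attributes set the drawing-buffer size (defaulting to roughly 300x150), while CSS only scales the display. A hedged sketch of keeping the two in sync; resizeCanvas() is an illustrative helper, not part of XBPS.

```javascript
// A canvas has two sizes: the drawing buffer (width/height attributes,
// what WebGL actually renders into) and the CSS display size. Setting
// only CSS leaves the buffer at its small default, which can look like
// a tiny blank rectangle. Keep both in sync:
function resizeCanvas(canvas, width, height) {
  canvas.width = width;               // drawing-buffer size in pixels
  canvas.height = height;
  canvas.style.width = width + "px";  // CSS display size
  canvas.style.height = height + "px";
}
```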
Secondly, I regularly get a display driver crash with nvlddmkm.sys and have to restart Chromium to get WebGL back up and running. I haven't fully worked out when this happens, but I do have my XBP canvas element in a jQuery UI accordion widget which I use to let a user view the LiDAR alongside many other viewable elements on the interface. So I'm not sure if the crash happens because the widget can toggle between showing and hiding the canvas, or if it's the dynamic change to the interface when I change from one LiDAR .asc file to a different one. Could there be a procedure to re-initialise XBP that I'm not doing properly every time I run a query to view a different data set?
-->5) If you're writing the .ASC file to the server, you may want to gzip it on the fly to reduce download times. Someone helped us with this a while ago
which helped quite a bit:
http://asalga.wordpress.com/2010/06/15/xb-pointstream-release-0-1/#...
Let us know if you want help with this.
Thanks for this tip, I'll have a look at it over the next week or so.
Re your follow-on post: the getTotalPoints() in our implementation can be determined beforehand, even in the case of .asc files, as I can count the number of points from the DB query.
With the rest of your points I'll have to spend a bit more time looking at the source, as I literally only set up what I have running as a testing implementation. Thus, if I can be of any help in testing or adding to this, please let me know. Cheers.
-
Andor Salga October 21st, 2010 @ 06:26 PM
Paul, just an FYI. I finished adding the getDataLayout() function
back into the library. Here's the ticket link: http://cdot.lighthouseapp.com/projects/52886/tickets/66
I'll stage it once Mike finishes reviewing it.
-
Andor Salga October 22nd, 2010 @ 01:36 AM
I updated the main ticket description to include
some types we can support in the future. -
Andor Salga November 23rd, 2010 @ 09:10 PM
Okay, after working on writing a parser for a while I came up
with the following interface: http://zenit.senecac.on.ca/wiki/index.php/XB_PointStream#Parser_Int...
-
Paul February 9th, 2011 @ 05:51 PM
Hi Andor,
Will be back in the loop for a while from next week if there is anything I can do to help progress the 0.5 release. The interface looks good and I've no particular suggestions at the moment, however I've not looked at any of this (or my) code for a few months now.
FYI, in the presentation given at FSOSS 2010 (http://www.fosslc.org/drupal/content/pointstream-rendering-mass-poi...) the 1 million point Lion Bust example is mentioned as being at the larger end of your test data sets. I've been integrating and still getting reasonable XB performance with 2 to 3 million point cloud files. These are dynamically generated in a fairly in-depth OpenLayers, PostGIS and XB PS mash-up. The performance hit has been moving the large data files across the network from the DB servers, but I'll hopefully be implementing your gzip suggestions soon to improve it.
-
Andor Salga February 9th, 2011 @ 06:26 PM
- State changed from assigned to checked-in
Hey Paul,
Wow, I'm surprised. But perf issues might be because I have such an old MacBook. Browser performance seems to be all over the place as well. I updated the parser interface here: http://zenit.senecac.on.ca/wiki/index.php/XB_PointStream#Parser_Int...
We'll have some examples up there sometime soon.
XB PointStream (working name)
A cross-browser JavaScript tool which will emulate Arius3D's PointStream viewer. It will be able to quickly render a large amount of point cloud data to the <canvas> tag using WebGL.
Current release: XBPS 0.8 (http://scotland.proximity.on.ca/asalga/releases/0.8/xbps-min-0.8.zip)
Referenced by
- 44 Create plugin architecture for filetype I think this is a duplicate of #65.
- 62 Support many point clouds in one canvas I have this working in my #65 branch.
- 68 Reduce sizes of .ASC test files placed all the point clouds in one directory in #65.
- 56 Make release script I started writing one, it's in the #65 branch. I'll fix i...