A programmer's notes, about everything and nothing. But mostly professional, I suppose.


Optimization: 41 days turned into 20 hours

Dave Bowman describes how he optimized a coordinate computation:

We are doing a project for a state DOT where we needed to get the location of photos taken along the roadway... And, they have these photos taken every year, so in total, we needed to locate roughly 24 million of these photos for this project.
Through some complex SQL machinations that I will skip here, we were able to get the route and mile post (measure) for each photo, into a set of tables in a SQL Server database. The tables also had empty columns for x,y in Web Mercator and Lat, Long, which we would populate by looking up these values on the Linear Reference System (LRS) feature class, based on the route and mile post. Simple!
A few years ago we wrote an ArcGIS Server REST Server Object Extension (SOE) using .NET and ArcObjects, which would return the x,y (in web mercator) from an LRS feature class, based on a specified route and mile post.
However, some initial testing showed that each of these lookups via the REST SOE was taking ~150ms. This is lightning fast when doing a single lookup (you can check out this page, which uses another version of the same function), but in a tight loop it added up to roughly 41 days of processing time.
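The 41-day figure is easy to verify as back-of-the-envelope arithmetic: 24 million sequential lookups at ~150 ms each.

```python
# Back-of-the-envelope check of the "41 days" figure:
# 24 million lookups at ~150 ms each, run sequentially.
PHOTOS = 24_000_000
MS_PER_LOOKUP = 150

total_ms = PHOTOS * MS_PER_LOOKUP
total_days = total_ms / 1000 / 3600 / 24
print(f"{total_days:.1f} days")  # → 41.7 days
```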
The first thing I did was decide to ditch the REST SOE. There is no question that there is a ton of overhead in running this sort of a process through an SOE.
Thus, I fired up Visual Studio, and created an ArcGIS Console Application using the templates provided with the ArcGIS desktop SDK. From there, I dumped in the ArcObjects code from the SOE, and started scaffolding up code around it. For those interested, the core code for this is in a gist at the end of the post…
My initial tests on a batch of 10,000 photos ran in a few minutes, with an average lookup time of 5ms! Problem solved! Woot!
Then I unleashed it on a test batch of 100,000 records. Blammo! About 12,000 records in, an OutOfMemoryException is thrown. Darn. And I had not even gotten to writing out the output text file!
The solution to this was to run the processing in batches. At the beginning of a batch, I’d open the feature class, run through the batch, and then close it.
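A minimal sketch of that batching idea, with stand-in stubs for the ArcObjects calls (`open_feature_class`, `lookup_xy`, and `close_feature_class` are hypothetical names, not the author's API): the expensive resource is opened once per batch rather than once per run, so whatever it holds is released between batches.

```python
# Stand-in stubs for the native feature-class API (hypothetical names).
def open_feature_class():
    return {"route": "I-90"}          # stand-in for the native feature class

def close_feature_class(fc):
    fc.clear()                        # stand-in for releasing native memory

def lookup_xy(fc, record_id):
    return (record_id * 1.0, record_id * 2.0)   # stand-in coordinate lookup

def process_in_batches(record_ids, batch_size=10_000):
    results = {}
    for start in range(0, len(record_ids), batch_size):
        batch = record_ids[start:start + batch_size]
        fc = open_feature_class()     # open once per batch, not per record
        try:
            for rid in batch:
                results[rid] = lookup_xy(fc, rid)
        finally:
            close_feature_class(fc)   # release per-batch resources
    return results
```

The point is the open/close placement, not the stubs: per-batch lifetimes cap peak memory at one batch's worth of native state.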
By this time I’d re-run the same 10,000 and 100,000 record test datasets at least 50 times, and this seemed to be a waste of time. Why not use my test runs to process real data? In my refactoring, I also wanted to ensure that the process could fail at any time, and re-starting things would pick up where it left off – zero wasted cycles.
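One common way to get that restart-safe behavior (the post doesn't show its exact query, so column names here are assumed, and SQLite stands in for SQL Server): each run selects only the rows whose output columns are still NULL, so a crashed run resumes with zero wasted work.

```python
# Restart-safe selection sketch: only fetch rows not yet populated.
# SQLite stands in for SQL Server; column names are assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE photos (id INTEGER PRIMARY KEY, route TEXT, mp REAL, x REAL, y REAL)")
conn.executemany(
    "INSERT INTO photos (id, route, mp) VALUES (?, ?, ?)",
    [(1, "I-90", 1.5), (2, "I-90", 2.5), (3, "US-2", 0.3)])
# Pretend a previous run completed record 1 before crashing:
conn.execute("UPDATE photos SET x = 100.0, y = 200.0 WHERE id = 1")

# Re-running the process naturally picks up where it left off:
todo = conn.execute(
    "SELECT id FROM photos WHERE x IS NULL LIMIT 10000").fetchall()
print(todo)  # → [(2,), (3,)]
```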
So I set two of these processes running on Friday evening and went to bed pretty pleased with myself. Saturday morning, I found that, with less than 1 million records processed, both processes had crashed due to SQL timeouts. At least I was glad I'd built in that fault tolerance, so the ~900,000 records that had completed and been stored would not have to be re-run.
I extended the timeouts in Massive, and started them back up, but noticed that now, the average time for a route & mile post lookup was ~30ms. What?
I suddenly had an idea what the issue was. The variation in lookup time was likely because some route features are longer than others. And longer routes have more vertices, which simply take more time to read from disk. Not only that, the code was searching for the route feature on every single iteration, so that tiny per-route variation in performance was magnified.
What if, when I did that SELECT TOP query, I made sure that every record I got back was on the same route? That way, I would only have to search for the route feature once per batch, and the tight loop over the batch would be working with the same geometry, simply looking up the point along that feature at the specified measure value.
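The per-route batching can be sketched as follows (stand-in functions, not the author's C#): group the pending records by route, fetch the route geometry once per group, and let the inner loop only interpolate a point at each measure along that already-loaded geometry.

```python
# Per-route batching sketch. load_route_geometry and point_at_measure
# are hypothetical stand-ins for the LRS feature search and the
# measure-to-x,y interpolation.
from itertools import groupby

pending = [("I-90", 1.5), ("US-2", 0.3), ("I-90", 2.5), ("US-2", 4.1)]

def load_route_geometry(route):        # the expensive, per-route disk read
    return f"<geometry of {route}>"

def point_at_measure(geometry, mp):    # cheap lookup along loaded geometry
    return (mp * 10.0, mp * 20.0)

results = []
# groupby needs its input sorted by the grouping key (the route name):
for route, group in groupby(sorted(pending), key=lambda r: r[0]):
    geom = load_route_geometry(route)  # one expensive fetch per route
    for _, mp in group:
        results.append((route, point_at_measure(geom, mp)))
```

The design choice is the same as in the post: the cost that varies with route length is paid once per batch instead of once per record, which is what flattened the lookup time.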
This took a little refactoring, and some SQL indexes, but it had an amazing effect. As I’m typing this, the process is now running rock solid at ~3ms per look up, which is roughly 1.2 million lookups per hour.
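The closing numbers also check out arithmetically: 3 ms per lookup gives the quoted ~1.2 million lookups per hour, and 24 million photos then take the ~20 hours of the title.

```python
# Verifying the headline throughput and the title's 20-hour total.
MS_PER_LOOKUP = 3
per_hour = 3600 * 1000 // MS_PER_LOOKUP
hours_total = 24_000_000 / per_hour
print(per_hour, round(hours_total))  # → 1200000 20
```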

Familiar, isn't it. Especially the frustration when you kick off a process that should run for a day or two and walk away, only to come back and find it died 10 minutes after you left.
Debugging, thoroughness, and fault tolerance are everything.

original post http://vasnake.blogspot.com/2013/12/41-20.html
