Works by itself, but not in a loop

I am using pyteaser to get information from a listing of websites. Since there could be hundreds of sites, I am trying to put it in a loop. When I run this code by itself:
summaries = SummarizeUrl(df['url'].values[1])
print (summaries)
It gives the following output, working fine:
[u'Bookings Institute researcher Paul C. Light published a study about failed government projects and their causes.', u'In 2011, U.K. government officials scraped a massive 9-year, $16 billion project to create a unified electronic health records system for British citizens.', u'Changing requirements, insufficient testing, and the monolithic nature of the project contributed to this failed government project for the failure.', u'Projected to cost $68 million, the projects costs skyrocketed to $700 million before being abandoned.', u'Here are a few examples of failed government projects, with estimated costs and causes:\n\nThe FBI system was designed to modernize tech systems and enable easier access across diverse FBI information assets.']
When I put it in a loop as follows:
i = 0
for i in list(df):
    summaries = SummarizeUrl(df['url'].values[i])
    str1 = ''.join(summaries)  # convert to string
    print(str1)
I get the following error:
IndexError: only integers, slices (:), ellipsis (...), numpy.newaxis (None) and integer or boolean arrays are valid indices
I am trying to increment i based on the value in the dataframe. The dataframe looks like this:
[screenshot of the dataframe]
It works when I do it manually.

You're not incrementing i with this. In your code i is not an integer: iterating over list(df) gives you the DataFrame's column labels, not row positions.
You could try printing your i like:
for i in list(df):
    print(type(i))
    print(i)
to see what it is. Then correct how you access the url, for example by looping over df['url'] itself instead of indexing with i.
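A minimal sketch of such a loop, assuming df has a 'url' column as in the question and SummarizeUrl comes from pyteaser:
from pyteaser import SummarizeUrl

for url in df['url']:              # iterate over the URL values directly
    summaries = SummarizeUrl(url)
    if summaries:                  # SummarizeUrl may return None for pages it cannot fetch
        str1 = ''.join(summaries)  # convert the list of sentences to one string
        print(str1)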

Related

NDB Queries Exceeding GAE Soft Private Memory Limit

I currently have an application running in the Google App Engine Standard Environment which, among other things, contains a large database of weather data and a frontend endpoint that generates graphs of this data. The database lives in Google Cloud Datastore, and the Python Flask application accesses it via the NDB library.
My issue is as follows: when I try to generate graphs for WeatherData spanning more than about a week (the data is stored for every 5 minutes), my application exceeds GAE's soft private memory limit and crashes. However, stored in each of my WeatherData entities are the relevant fields that I want to graph, in addition to a very large json string containing forecast data that I do not need for this graphing application. So, the part of the WeatherData entities that is causing my application to exceed the soft private memory limit is not even needed in this application.
My question is thus as follows: is there any way to query only certain properties in the entity, such as can be done for specific columns in a SQL-style query? Again, I don't need the entire forecast json string for graphing, only a few other fields stored in the entity. The other approach I tried to run was to only fetch a couple of entities out at a time and split the query into multiple API calls, but it ended up taking so long that the page would time out and I couldn't get it to work properly.
Below is my code for how it is currently implemented and breaking. Any input is much appreciated:
wDataCsv = 'Time,' + ','.join(wData.keys())
qry = WeatherData.time_ordered_query(ndb.Key('Location', loc), start=start_date, end=end_date)
for acct in qry.fetch():
    d = [acct.time.strftime(date_string)]
    for attr in wData.keys():
        d.append(str(acct.dict_access(attr)))
        wData[attr].append([acct.time.strftime(date_string), acct.dict_access(attr)])
    wDataCsv += '\n' + ','.join(d)

# Child entity - log of weather at the parent location
class WeatherData(ndb.Model):
    # model for data to save
    ...
    # Function for querying data below a given ancestor between two optional times
    @classmethod
    def time_ordered_query(cls, ancestor_key, start=None, end=None):
        return cls.query(cls.time >= start, cls.time <= end, ancestor=ancestor_key).order(-cls.time)
EDIT: I tried the iterative page fetching strategy described in the link from the answer below. My code was updated to the following:
wDataCsv = 'Time,' + ','.join(wData.keys())
qry = WeatherData.time_ordered_query(ndb.Key('Location', loc), start=start_date, end=end_date)
cursor = None
while True:
    gc.collect()
    fetched, next_cursor, more = qry.fetch_page(FETCHNUM, start_cursor=cursor)
    if fetched:
        for acct in fetched:
            d = [acct.time.strftime(date_string)]
            for attr in wData.keys():
                d.append(str(acct.dict_access(attr)))
                wData[attr].append([acct.time.strftime(date_string), acct.dict_access(attr)])
            wDataCsv += '\n' + ','.join(d)
    if more and next_cursor:
        cursor = next_cursor
    else:
        break
where FETCHNUM=500. In this case, I am still exceeding the soft private memory limit for queries of the same length as before, and the query takes much, much longer to run. I suspect the problem may be that Python's garbage collector is not freeing the already-processed entities that are still referenced, but even when I include gc.collect() I see no improvement there.
EDIT:
Following the advice below, I fixed the problem using Projection Queries. Rather than have a separate projection for each custom query, I simply ran the same projection each time: namely querying all properties of the entity excluding the JSON string. While this is not ideal as it still pulls gratuitous information from the database each time, building a separate projection for each specific query is not scalable due to the exponential growth of necessary indices. For this application, as each additional property adds negligible memory (aside from that json string), it works!
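For reference, a hedged sketch of what that projection can look like with NDB; the graphed property names here (temperature, humidity, forecast_json) are placeholders rather than the real model fields:
from google.appengine.ext import ndb

class WeatherData(ndb.Model):
    time = ndb.DateTimeProperty()
    temperature = ndb.FloatProperty()   # hypothetical graphed field
    humidity = ndb.FloatProperty()      # hypothetical graphed field
    forecast_json = ndb.TextProperty()  # the large string we never want to load

graph_props = [WeatherData.time, WeatherData.temperature, WeatherData.humidity]
qry = WeatherData.query(WeatherData.time >= start_date,
                        WeatherData.time <= end_date,
                        ancestor=ndb.Key('Location', loc)).order(-WeatherData.time)

# Only the projected properties are materialized, so the big JSON string
# never enters instance memory.
for row in qry.fetch(projection=graph_props):
    csv_line = ','.join([row.time.strftime(date_string),
                         str(row.temperature), str(row.humidity)])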
You can use projection queries to fetch only the properties of interest from each entity. Watch out for the limitations, though. And this still can't scale indefinitely.
You can split your queries across multiple requests (more scalable), but use bigger chunks, not just a couple (you can fetch 500 at a time) and cursors. Check out examples in How to delete all the entries from google datastore?
You can bump your instance class to one with more memory (if not done already).
You can prepare intermediate results (also in the datastore) from the big entities ahead of time and use these intermediate pre-computed values in the final stage.
Finally, you could try to create and store just portions of the graphs and stitch them together at the end (only if it comes to that; I'm not sure exactly how it would be done, but I imagine it wouldn't be trivial).

How to make datastore keys mapreduce-friendly(-er)?

Edit: See my answer. Problem was in our code. MR works fine, it may have a status reporting problem, but at least the input readers work fine.
I ran an experiment several times now and I am now sure that mapreduce (or DatastoreInputReader) has odd behavior. I suspect this might have something to do with key ranges and splitting them, but that is just my guess.
Anyway, here's the setup we have:
we have an NDB model called "AdGroup"; when creating new entities of this model we use the same id returned from AdWords (it's an integer), but we use it as a string: AdGroup(id=str(adgroupId))
we have 1,163,871 of these entities in our datastore (that's what the "Datastore Admin" page tells us - I know it's not an entirely accurate number, but we don't create/delete adgroups very often, so we can say for sure that the number is 1.1 million or more)
mapreduce is started (from another pipeline) like this:
yield mapreduce_pipeline.MapreducePipeline(
    job_name='AdGroup-process',
    mapper_spec='process.adgroup_mapper',
    reducer_spec='process.adgroup_reducer',
    input_reader_spec='mapreduce.input_readers.DatastoreInputReader',
    mapper_params={
        'entity_kind': 'model.AdGroup',
        'shard_count': 120,
        'processing_rate': 500,
        'batch_size': 20,
    },
)
So, I've tried to run this mapreduce several times today without changing anything in the code and without making changes to the datastore. Every time I ran it, mapper-calls counter had a different value ranging from 450,000 to 550,000.
Correct me if I'm wrong, but considering that I use the very basic DatastoreInputReader - mapper-calls should be equal to the number of entities. So it should be 1.1 million or more.
Note: the reason why I noticed this issue in the first place is because our marketing guys started complaining that "it's been 4 days after we added new adgroups and they still don't show up in your app!".
Right now, I can think of only one workaround - write all keys of all adgroups into a blobstore file (one per line) and then use BlobstoreLineInputReader. The writing to blob part would have to be written in a way that does not utilize DatastoreInputReader, of course. Should I go with this for now, or can you suggest something better?
Note: I have also tried using DatastoreKeyInputReader with the same code - the results were similar - mapper-calls were between 450,000 and 550,000.
So, finally, the questions. Is it important how you generate ids for your entities? Is it better to use int ids instead of str ids? In general, what can I do to make it easier for mapreduce to find and map all of my entities?
PS: I'm still in the process of experimenting with this, I might add more details later.
After further investigation we have found that the error was actually in our code. So, mapreduce actually works as expected (mapper is called for every single datastore entity).
Our code was calling some Google services functions that were sometimes failing (with wonderfully cryptic ApplicationError messages). Due to these failures, MR tasks were being retried. However, we had set a limit on taskqueue retries. MR did not detect or report this in any way - MR was still showing "success" on the status page for all shards. That is why we thought that everything was fine with our code and that there was something wrong with the input reader.

How to get the bounding coordinates for a US postal(zip) code?

Is there a service/API that will take a postal/zip code and return the bounding(perimeter) coordinates so I can build a Geometry object in a MS SQL database?
By bounding coordinates, I mean I would like to retrieve a list of GPS coordinates that construct a polygon that defines the US zip code.
An elaboration of my comment, that ZIP codes are not polygons....
We often think of ZIP codes as areas (polygons) because we say, "Oh, I live in this ZIP code..." which gives the impression of a containing region, and maybe the fact that ZIP stands for "Zone Improvement Plan" helps the false association with polygons.
In actuality, ZIP codes are lines which represent, in a sense, mail carrier routes. Geometrically, lines do not have area. Just as lines are strings of points along a coordinate plane, ZIP code lines are strings of delivery points in the abstract space of USPS-designated addresses.
They are not correlated to geographical coordinates. What you will find, though, is that they appear to be geographically oriented because it would be inefficient for carriers to have a route completely irrelevant of distance and location.
What is this "abstract space of USPS-designated addresses"? That's how I am describing the large and mysterious database of deliverable locations maintained by the US Postal Service. Addresses are not allotted based on geography, but on the routes that carriers travel which usually relates to streets and travelability.
Some 5-digit ZIP codes are only a single building, or a complex of buildings, or even a single floor of a building (yes, multiple zip codes can be at a single coordinate because their delivery points are layered vertically). Some of these -- among others -- are "unique" ZIPs. Companies and universities frequently get their own ZIP codes for marketing or organizational purposes. For instance, the ZIP code "12345" belongs to General Electric up in Schenectady, NY. (Edit: In a previous version of Google Maps, when you follow that link, you'd notice that the placement marker was hovering, because it points to a ZIP code, which is not a coordinate. While most US ZIP codes used to show a region on Google Maps, these types cannot because the USPS does not "own" them, so to speak, and they have no area.)
Just for fun, let's try verifying an address in a unique ZIP code. Head over to SmartyStreets and punch in a bogus address in 12345, like:
Street: 999 Sdf sdf
ZIP Code: 12345
When you try to verify that, notice that... it's VALID! Why? The USPS will deliver a piece to the receptacle for that unique ZIP code, but at that point, it's up to GE to distribute it. Pretty much anything internal to the ZIP code is irrelevant to the USPS, including the street address (technically "delivery line 1"). Many universities function in a similar manner. Here's more information regarding that.
Now, try the same bogus address, but without a ZIP code, and instead do the city/state:
Street: 999 Sdf sdf
City: Schenectady
State: NY
It doesn't validate. This is because even though Schenectady contains 12345, where the address is "valid," it geometrically intersects with the "real" ZIP codes for Schenectady.
Take another instance: military. Certain naval ships have their own ZIP codes. Military addresses are an entirely different class of addresses using the same namespace. Ships move. Geographical coordinates don't.
ZIP precision is another fun one. 5-digit ZIP codes are the least "precise" (though the term "specific" might be more meaningful here, since ZIP codes don't pinpoint anything). 7- and 9-digit ZIP codes are the most specific, often down to block or neighborhood-level in urban areas. But since each ZIP code is a different size, it's really hard to tell what actual distances you're talking about.
A 9-digit ZIP code might be portioned to a floor of a building, so there you have overlapping ZIP codes for potentially hundreds of addresses.
Bottom line: ZIP codes don't, contrary to popular belief, provide geographical or boundary data. They vary widely and are actually quite un-helpful unless you're delivering mail or packages... but the USPS' job was to design efficient carrier routes, not partition the population into coordinate regions so much.
That's more the job of the census bureau. They've compiled a list of cartographic boundaries since ZIP codes are "convenient" to work with. To do this, they sectioned bunches of addresses into census blocks. Then, they aggregated USPS ZIP code data to find the relation between their census blocks (which has some rough coordinate data) and the ZIP codes. Thus, we have approximations of what it would look like to plot a line as a polygon. (Apparently, they converted a 1D line into a 2D polygon by transforming a 2D polygon based on its contents to fit linear data -- for each non-unique, regular ZIP code.)
From their website (link above):
A ZIP Code tabulation area (ZCTA) is a statistical geographic entity that approximates the delivery area for a U.S. Postal Service five-digit or three-digit ZIP Code. ZCTAs are aggregations of census blocks that have the same predominant ZIP Code associated with the addresses in the U.S. Census Bureau's Master Address File (MAF). Three-digit ZCTA codes are applied to large contiguous areas for which the U.S. Census Bureau does not have five-digit ZIP Code information in its MAF. ZCTAs do not precisely depict ZIP Code delivery areas, and do not include all ZIP Codes used for mail delivery. The U.S. Census Bureau has established ZCTAs as a new geographic entity similar to, but replacing, data tabulations for ZIP Codes undertaken in conjunction with the 1990 and earlier censuses.
The USCB's dataset is incomplete, and at times inaccurate. Google still has holes in their data, too (the 12345 is a somewhat good example) -- but Google will patch it eventually by going over each address and ZIP code by hand. They do this already, but haven't made all their map data perfect quite yet. Naturally, access to this data is limited to API terms, and it's very expensive to raise these.
Phew. I'm beat. I hope that helps clarify things. Disclaimer: I used to be a developer at SmartyStreets. More information on geocoding with address data.
Even more information about ZIP codes.
What you are asking for is a service to provide "Free Zip code Geocoding". There are a few out there with varying quality. You're going to have a bad time coding something like this yourself for a few reasons:
Zip codes can be assigned to a single building or to a post office.
Zip codes are NOT considered a polygonal area. Projecting Zip codes to a polygonal area will require you to make an educated guess where the boundary is between one zipcode and the next.
ZIP code address data specifies only a center location for the ZIP code. Zip code data provides the general vicinity of an address. Mailing addresses that exist between one zipcode and another can be in dispute as to which zipcode they actually fall in.
A mailing address may be physically closer to zipcode 11111, yet its official zip code is a more distant zip code point 11112.
Google Maps has a geocoding API:
The Google Maps API is client-side JavaScript. You can query the geocoding system directly from PHP using an HTTP request. However, Google Maps only gives you what the United States Postal Service gives them: a point representing the center of the zipcode.
https://developers.google.com/maps/#Geocoding_Examples
map city/zipcode polygons using google maps
Thoughts on projecting a zipcode to its lat/long bounding box
There are approximately 43,000 ZIP Codes in the United States. This number fluctuates from month to month, depending on the number of changes made. The zipcodes used by the USPS are not represented as polygons and do not have hard and fast boundaries.
The USPS (United States Postal Service) is the authority that defines each zipcode lat/long. Any software which resolves a zipcode to a geographical location would be in need of weekly updates. One company called alignstar provides demographics and GIS data of zipcodes ( http://www.alignstar.com/data.html ).
Given a physical (mailing) address, find the geographical coordinates in order to display that location on a map.
If you want to reliably project what shape the zipcode is in, you are going to need to brute force it and ask: "give me every street address by zipcode", then paint boxes around those mis-shapen blobs. Then you can get a general feel for what geographical areas the zipcodes cover.
http://vterrain.org/Culture/geocoding.html
If you were to throw millions of mailing address points into an algorithm resolving every one to a lat/long, you might be able to build a rudimentary blob bounding box of that zipcode. You would have to re-run this algorithm and it would theoretically heal itself whenever the zipcode numbers move.
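As a toy illustration of that brute-force idea, here is a small sketch (plain Python, with made-up sample coordinates) that collapses a set of geocoded delivery points for one ZIP code into a crude bounding box:
def bounding_box(points):
    # points is an iterable of (lat, lon) tuples; returns (south, west, north, east)
    lats = [p[0] for p in points]
    lons = [p[1] for p in points]
    return (min(lats), min(lons), max(lats), max(lons))

# Made-up geocoded delivery points for a single ZIP code
geocoded_points = [(42.8142, -73.9396), (42.8151, -73.9410), (42.8137, -73.9389)]
print(bounding_box(geocoded_points))   # -> (42.8137, -73.941, 42.8151, -73.9389)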
Other ideas
http://shop.delorme.com/OA_HTML/DELibeCCtpSctDspRte.jsp?section=10075
http://www.zip-codes.com/zip-code-map-boundary-data.asp
Step 1: download cb_2018_us_zcta510_500k.zip from
https://www.census.gov/geographies/mapping-files/time-series/geo/carto-boundary-file.html
If you want to keep them in MySQL:
Step 2: in your MySQL server create a database named spatialdata and run this command:
ogr2ogr -f "MySQL" MYSQL:"spatialdata,host=localhost,user=root" -nln "map" -a_srs "EPSG:4683" cb_2018_us_zcta510_500k.shp -overwrite -addfields -fieldTypeToString All -lco ENGINE=MyISAM
I uploaded the file on GitHub (https://github.com/sahilkashyap64/USA-zipcode-boundary/blob/master/USAspatialdata.zip).
In your spatialdata database there will be 2 tables named map and geometry_columns.
In 'map' there is a column named "shape"; the shape column is of type "geometry" and contains the polygon/multipolygon features.
In 'geometry_columns' the SRID is defined.
How to check if a point falls within a polygon:
SELECT * FROM map WHERE ST_Contains( map.SHAPE, ST_GeomFromText( 'POINT(63.39550 -148.89730 )', 4683 ) )
And to show the boundary on a map:
select zcta5ce10 as zipcode, ST_AsGeoJSON(SHAPE) sh from map where ST_Contains( map.SHAPE, ST_GeomFromText( 'POINT(34.1116 -85.6092 )', 4683 ) )
"ST_AsGeoJSON" this returns spatial data as geojson.
Use http://geojson.tools/
"HERE maps" to check the shape of geojson
If you want to generate TopoJSON:
mapshaper converts the shapefile to TopoJSON (no need to convert it to a KML file):
npx -p mapshaper mapshaper-xl cb_2018_us_zcta510_500k.shp snap -simplify 0.1% -filter-fields ZCTA5CE10 -rename-fields zip=ZCTA5CE10 -o format=topojson cb_2018_us_zcta510_500k.json
If you want to convert the shapefile to KML:
ogr2ogr -f KML tl_2019_us_zcta510.kml -mapFieldType Integer64=Real tl_2019_us_zcta510.shp
I have used Mapbox GL to display 2 zipcodes.
Example: https://sahilkashyap64.github.io/USA-zipcode-boundary/
Code: https://github.com/sahilkashyap64/USA-zipcode-boundary
SQL Server Solution
Download the Shape files from the US Census:
https://catalog.data.gov/dataset/2019-cartographic-boundary-shapefile-2010-zip-code-tabulation-areas-for-united-states-1-500000
I then found this repository to import the shape file to SQL Server, it was very fast and required no additional coding: https://github.com/xfischer/Shape2SqlServer
Then I could write my own script to find out which zip codes are in a polygon I created:
DECLARE @polygon GEOMETRY;
DECLARE @isValid bit = 0;
DECLARE @p nvarchar(2048) = 'POLYGON((-120.1547 39.2472,-120.3758 39.1950,-120.2124 38.7734,-119.6590 38.8162,-119.6342 39.3672,-120.1836 39.2525,-120.1547 39.2472))'
SET @polygon = GEOMETRY::STPolyFromText(@p, 4326)
SET @isValid = @polygon.STIsValid()
IF (@isValid = 1)
    SET @polygon = @polygon.MakeValid();
SET @isValid = @polygon.STIsValid()
IF (@isValid = 1)
BEGIN
    SELECT * FROM cb_2019_us_zcta510_500k
    WHERE geom.STIntersects(@polygon) = 1
END
ELSE
    SELECT 'Polygon not valid'
I think this is what you need; it uses the US Census as its repository: US Zipcode Boundaries API: https://www.boundaries-io.com
The above API shows US boundaries (GeoJSON) by zipcode, city, and state. You should use the API programmatically to handle large results.
Disclaimer: I work here.
I think the world GeoJSON link and the Google Maps geocode API can help you.
Example: you can use the geocode API to geocode the zip; you will get the city, state, and country. Then you search the world and US GeoJSON to get the boundary. I have an example of a US state boundary, like dsdlink.

Multi-location entity query solution with geographic distance calculation

In my project we have an entity called Trip. This trip has two points: start and finish. Start and finish are geo coordinates with some added properties like address, etc.
What I need is to query for all Trips that satisfy search criteria for both start and finish.
Something like
select from trips where start near 16,16 and finish near 18,20 where type = type
So my question is: which database can offer such functionality?
What I have tried:
I have explored MongoDB, which has support for geo indexes but does not support this use case. The current solution stores the points as separate documents which have a reference to a Trip. We run two separate queries for starts and finishes, extract the ids of their associated trips, select the trip ids that are found in both starts and finishes, and finally return a collection of trips.
On a small sample it works fine, but with a larger collection it gets slow, and it's like scratching my left ear with my right hand.
So I am looking for a better solution.
I know about Neo4j and its spatial plugin, but I couldn't even make it work on Windows. Would it support our use case?
Or are there any better solutions? Preferably with an object mapper written in PHP.
Like edze already said, Postgres (PostGIS) or SQLite (SpatiaLite) is what you're looking for:
SELECT
    *
FROM
    trips
WHERE
    ST_Distance(ST_StartPoint(way), ST_GeomFromText('POINT(16 16)', 4326)) < 5
    AND ST_Distance(ST_EndPoint(way), ST_GeomFromText('POINT(18 20)', 4326)) < 5
    AND type = 'type'

SSIS/VB.NET Equivalent of SQL IN (anonymous array.Contains())

I've got some SQL which performs complex logic on combinations of GL account numbers and cost centers like this:
WHEN (#IntGLAcct In (
882001, 882025, 83000154, 83000155, 83000120, 83000130,
83000140, 83000157, 83000010, 83000159, 83000160, 83000161,
83000162, 83000011, 83000166, 83000168, 83000169, 82504000,
82504003, 82504005, 82504008, 82504029, 82530003, 82530004,
83000000, 83000100, 83000101, 83000102, 83000103, 83000104,
83000105, 83000106, 83000107, 83000108, 83000109, 83000110,
83000111, 83000112, 83000113, 83100005, 83100010, 83100015,
82518001, 82552004, 884424, 82550072, 82552000, 82552001,
82552002, 82552003, 82552005, 82552012, 82552015, 884433,
884450, 884501, 82504025, 82508010, 82508011, 82508012,
83016003, 82552014, 81000021, 80002222, 82506001, 82506005,
82532001, 82550000, 82500009, 82532000))
Overall, the whole thing performs poorly in a UDF, especially when it's all nested and the order of the steps matters, etc. I can't make it table-driven just yet, because the business logic is so terribly convoluted.
So I'm doing a little exploratory work in moving it into SSIS to see about doing it in a little bit of a different way. Inside my script task, however, I've got to use VB.NET, so I'm looking for an alternative to this:
Select Case IntGLAcct = 882001 OR IntGLAcct = 882025 OR ...
Which is obviously a lot more verbose, and would make it terribly hard to port the process.
Even something like ({90605, 90607, 90610} AS List(Of Integer)).Contains(IntGLAcct) would be easier to port, but I can't get the initializer to give me an anonymous array like that. And there are so many of these little collections, I'm not sure I can create them all in advance.
It really all NEEDS to be in one place. The business changes this logic regularly. My strategy was to use the udf to mirror their old "include" file, but performance has been poor. Now each of the functions takes just 2 or three parameters. It turns out that in a dark corner of the existing system they actually build a multi-million row table of all these results - even though the pre-calced table is not used much.
So my new experiment is to (since I'm still building the massive cross join table to reconcile that part of the process) go ahead and use the table instead of the code, but populate this table during an SSIS phase instead of calling the udf 12 million times - because my udf version just basically stopped working within a reasonable time frame and the DBAs are not of much help right now. Yet, I know that SSIS can process these rows pretty efficiently, because each month I bring in the known good results, dozens of multi-million row tables, from the legacy system in minutes AND run queries to reconcile that there are no differences with the new versions.
The SSIS code would theoretically become the keeper of the business logic, and the efficient table would be built from that (based on all known parameter combinations). Of course, if I can simplify the logic down to a real logic table, that would be the ultimate design - but that's not really foreseeable at this point.
Try this:
Array.IndexOf(New Integer() {90605, 90607, 90610}, IntGLAcct) >-1
What if you used a conditional split transform on your incoming data set and then used expressions or something similar (I'm not sure if your GL Accounts are fixed or if you're going to dynamically pass them in) to apply to the results? You can then take the resulting data from that and process as necessary.
