Setting container location using the JClouds Location API - Rackspace

I am using JClouds to abstract over various cloud providers, including Rackspace.
I am using the JClouds BlobStore to store files. The API suggests that I can create a container in a specific (provider-dependent) location using:
context.getBlobStore().createContainerInLocation(location, "containerName");
But how am I supposed to obtain the location variable (of interface type Location)?
For example, Rackspace supports Dallas and Chicago as container locations. So I would like to do something like this:
Location chicago = ....; // Get the Location that points to "US-IL" (Chicago)
context.getBlobStore().createContainerInLocation(chicago, "container");
The 'magic' string US-IL was taken from the source.
I tried using this:
Set<? extends Location> locations = context.getBlobStore().listAssignableLocations(); // only contains a single default location
Location parent = locations.iterator().next().getParent(); // not sure what this refers to; it is scoped at PROVIDER level
Can anyone shed some light on how I should be using this?
Related question: JClouds for Azure Blob (not applicable, because the answer is Azure-specific and did not require a location...)

This is now possible in jclouds 1.8.0 and above.
RegionScopedBlobStoreContext blobStoreContext = ContextBuilder.newBuilder(PROVIDER)
    .credentials(username, apiKey)
    .buildView(RegionScopedBlobStoreContext.class);
BlobStore blobStore = blobStoreContext.getBlobStore(REGION);
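From there you can discover which regions are available at runtime. A minimal sketch (the configuredRegions() call and the ORD/DFW region IDs are my assumptions based on the jclouds 1.8 region-scoped Swift API, so verify them against your provider and version):

// List the regions the provider exposes, then scope a BlobStore to one of them.
Set<String> regions = blobStoreContext.configuredRegions(); // e.g. [ORD, DFW, IAD] on Rackspace
System.out.println("Available regions: " + regions);

// A BlobStore scoped to Chicago (ORD); with a region-scoped store,
// passing null as the location creates the container in that region.
BlobStore chicagoStore = blobStoreContext.getBlobStore("ORD");
chicagoStore.createContainerInLocation(null, "containerName");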

How do we get the document file URL using the Watson Discovery Service?

I don't see a solution to this in the available API documentation.
It is also not available in the web console.
Is it possible to get the file URL using the Watson Discovery Service?
If you need to store the original source/file URL, you can include it as a field within your documents in the Discovery service; you will then be able to query that field back out when needed.
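For example, a minimal sketch (the field names and doc.json filename are hypothetical, and this assumes a discovery client set up as in the next answer):

# Hypothetical JSON document with the source URL embedded as a queryable field
import json

doc = {
    "title": "Chlorine handling guide",
    "text": "body of the document goes here",
    "source_url": "http://mysite/dis030.docx"
}
add_doc = discovery.add_document('{environment-id}', '{collection-id}',
                                 file=json.dumps(doc),
                                 filename='doc.json',
                                 file_content_type='application/json').get_result()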
I also struggled with this but ultimately got it working using the Python bindings for Watson Discovery. The online documentation and API reference are very poor; here's what I used to get it working
(assuming you have a Watson Discovery service and a created collection):
# Programmatic upload and retrieval of documents and metadata with Watson Discovery
from watson_developer_cloud import DiscoveryV1
import os
import json

discovery = DiscoveryV1(
    version='2017-11-07',
    iam_apikey='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
    url='https://gateway-syd.watsonplatform.net/discovery/api'
)

environments = discovery.list_environments().get_result()
print(json.dumps(environments, indent=2))
This gives you your environment ID. Now append to your code:
collections = discovery.list_collections('{environment-id}').get_result()
print(json.dumps(collections, indent=2))
This shows you the collection ID you need to upload documents programmatically. You should have a document to upload (in my case, an MS Word document) and its accompanying URL from your own source document system. I'll use a trivial fictitious example.
NOTE: the documentation DOES NOT tell you to append 'rb' to the open call, but it is required when uploading a binary Word document, as in my example below. Raw text/HTML documents can be uploaded without the 'rb' flag.
url = {"source_url": "http://mysite/dis030.docx"}
with open(os.path.join(os.getcwd(), '{path to your document folder with trailing / }', 'dis030.docx'), 'rb') as fileinfo:
    add_doc = discovery.add_document('{environment-id}', '{collection-id}', metadata=json.dumps(url), file=fileinfo).get_result()
print(json.dumps(add_doc, indent=2))
print(add_doc["document_id"])
Note the setting up of the metadata as a JSON dictionary, which is then encoded with json.dumps in the call's parameters. So far I've only wanted to store the original source URL, but you could extend this with other parameters as your use case requires.
This call to Discovery gives you the document ID.
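If you need more than the URL, the metadata dictionary can simply carry extra keys (the extra field names here are hypothetical):

# Hypothetical extra fields stored alongside the source URL
url = {
    "source_url": "http://mysite/dis030.docx",
    "author": "jbloggs",
    "department": "safety"
}
# pass json.dumps(url) as the metadata parameter of add_document, as above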
You can now query the collection and extract the metadata using something like a Discovery query:
my_query = discovery.query('{environment-id}', '{collection-id}', natural_language_query="chlorine safety")
print(json.dumps(my_query.result["results"][0]["metadata"], indent=2))
Note: I'm extracting just the stored metadata here from within the overall returned results. If you instead just did print(my_query), you would get the full response from Discovery, but there's a lot to wade through to find your own custom metadata.
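And to pull the stored URL itself back out (a small sketch built on the source_url metadata field defined above):

# The metadata attached at upload time comes back under "metadata"
source_url = my_query.result["results"][0]["metadata"]["source_url"]
print(source_url)  # e.g. http://mysite/dis030.docx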

Make a Solr query from GeoTools through GeoServer

As the title mentions, I am trying to issue a query from GeoTools (through GeoServer) to get features from a Solr index.
To be more precise:
I saw in the GeoServer user manual that I can query Solr like this over HTTP:
http://localhost:8080/geoserver/wfs?service=WFS&version=1.1.0&request=GetFeature
&typeName=mySolrLayer
&format="xxx"
&viewparams=q:"mySolrQuery"
The important part of this URL is viewparams, which I want to use directly from GeoTools.
I have already tested this case (this is part of my code):
URL url = new URL(
    "http://localhost:8080/geoserver/wfs?request=GetCapabilities&VERSION=1.1.0"
);

Map<String, Object> params = new HashMap<>();
params.put(WFSDataStoreFactory.URL.key, url);
params.put("viewparams", "q:myquery");

Hints hints = new Hints();
hints.put(Hints.VIRTUAL_TABLE_PARAMETERS, "q:myquery");
query.setHints(hints);
...
featureSource.getFeatures(query);
But this doesn't seem to work: the URL sent to GeoServer is a plain GetFeature request without the viewparams parameter.
I tried this with geotools-12.2, geotools-13.2 and geotools-15-SNAPSHOT, but I did not succeed in passing the query; GeoServer sends me all the features in my database and does not take viewparams into account.
I need to do it like this because the query actually comes from another program, and I want to pass it easily to another part of the project...
Can someone help me?
There doesn't currently seem to be a way to do this in GeoTools' WFSDataStore implementations, because the GetFeature request is constructed from the URL provided by the GetCapabilities document. This is what the standard requires, but it may be worth filing a feature-enhancement request to allow clients to override this URL (as QGIS does, for example); that would let you specify the additional parameter in your base URL, which would then be passed on to the server as you need.
Unfortunately, the WFS module currently lives in unsupported land, so unless you have the resources to work on this issue yourself and can provide a PR to implement it, there is not a great chance of it being implemented.
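As a stopgap you can bypass the WFS datastore and send the GetFeature request yourself with viewparams appended, then hand the response to whatever parser you use. A rough sketch (my workaround, not a GeoTools API; the layer name and query are the placeholders from the question):

String base = "http://localhost:8080/geoserver/wfs";
String viewparams = "q:" + URLEncoder.encode("myquery", "UTF-8");
String request = base + "?service=WFS&version=1.1.0&request=GetFeature"
        + "&typeName=mySolrLayer"
        + "&viewparams=" + viewparams;
try (InputStream in = new URL(request).openStream()) {
    // 'in' holds the GML FeatureCollection returned by GeoServer;
    // parse it with GeoTools' GML parser or stream it onward.
}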

Why does the Google App Engine Python Blob care about what is stored in it

I have been struggling with a very basic understanding of how Google App Engine stores data.
I have defined a class describing a client profile, as follows:
class ClientProfile(ndb.Model):
    nickname = ndb.StringProperty(required=True)
    photo = ndb.BlobProperty()
    uuid = ndb.StringProperty(required=True)
I retrieve the image data by uploading image.src via a POST using jquery.ajax(...).
The data is properly sent to Google App Engine and I can assign it to a variable with:
imagesrc = self.request.get('photosrcbase64')
The data content looks something like:
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA...
So far so good: the data is an image/png and the encoding is Base64. But should I care if it ends up in a Blob?
Now if I try to put the data into the photo blob, for example with:
clientprofile.photo = imagesrc
I get a BadValueError, in this case:
BadValueError: Expected str, got
u'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAMgAA
I tried all kinds of combinations using different solutions and got back all kinds of BadValue or type errors.
My questions are:
1) Why does the Blob care? If it is binary storage, I should be able to dump anything into it without having to interpret and/or convert it. Is a Blob really binary/raw storage, and why does it care about how things are stored in it?
2) I started having problems with this two years ago, when still using db instead of ndb. I found a solution that I did not understand: stripping out the MIME information at the beginning of the data string, decoding the string from Base64, and using db.Blob(...) to convert my string to a Blob. The problem is that db.Blob() does not seem to exist in ndb, so I can't do this any more.
I am convinced that I am missing something fundamental in the way information is exchanged between Google App Engine and the browser, and I thank you in advance for a mind-clearing answer.
A BlobProperty is meant to hold binary data. The str type in Python 2 is fully equivalent to a byte string, since the only characters allowed are
[chr(byte_value) for byte_value in range(2**8)]
So before storing the value from self.request.get('photosrcbase64'), which is of type unicode, you'll need to cast it to type str.
You can do this either directly:
imagesrc = str(self.request.get('photosrcbase64'))
or by first trying to decode to ASCII.
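If you want to store the decoded PNG bytes rather than the whole data URI, here is a sketch of the approach the question describes (standard library only; variable names taken from the question):

import base64

datauri = self.request.get('photosrcbase64')  # u'data:image/png;base64,iVBOR...'

# Split off the 'data:image/png;base64,' header and decode the payload;
# base64.b64decode returns a plain str in Python 2, which BlobProperty accepts.
header, b64data = datauri.split(',', 1)
clientprofile.photo = base64.b64decode(str(b64data))
clientprofile.put()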

Google AppEngine datastore config: reusable?

The documentation about datastore config objects confuses me:
"A configuration object can be used any number of times. You must create a separate configuration object for each datastore call that uses it."
(from the App Engine docs)
So can I do something like this:
config = db.create_config(deadline=5)
db.put(someModels, config=config)
db.delete(someKeys, config=config)
Or do I have to do something like this:
config = db.create_config(deadline=5)
db.put(someModels, config=config)
config = db.create_config(deadline=5)
db.delete(someKeys, config=config)
?
Thanks
That is a leftover from when config options were passed by creating an RPC object, and each RPC object could be used only once. The new datastore Configuration objects can be used multiple times: parameters are read from them and passed on, so your first snippet (reusing config) is fine.
For reference, when settings were passed by creating RPC objects the docs read:
An RPC object can only be used once. You must create a separate RPC object for each datastore call that uses it.

How can I upload a file to a Sharepoint Document Library using Silverlight and client web-services?

Most of the solutions I've come across for SharePoint document library uploads use the HTTP PUT method, but I'm having trouble finding a way to do this in Silverlight because it restricts the available HTTP methods. I visited http://msdn.microsoft.com/en-us/library/dd920295(VS.95).aspx to see how to allow PUT in my code, but I can't see how that helps you issue an HTTP PUT.
I am using client web services, so that limits some of the SharePoint functions available.
That leaves me with these questions:
Can I do an http PUT in Silverlight?
If I can't or there is another better way to upload a file, what is it?
Thanks
Figured it out!! Works like a charm:
public void UploadFile(String fileName, byte[] file)
{
    // format the destination URL
    string[] destinationUrls = { "http://qa.sp.dca/sites/silverlight/Answers/" + fileName };

    // fill out the metadata
    // remark: don't set the Name field, because this is the name of the document
    SharepointCopy.FieldInformation titleInformation = new SharepointCopy.FieldInformation
    {
        DisplayName = fileName,
        InternalName = fileName,
        Type = SharepointCopy.FieldType.Text,
        Value = fileName
    };

    // to specify the content type
    SharepointCopy.FieldInformation ctInformation = new SharepointCopy.FieldInformation
    {
        DisplayName = "XML Answer Doc",
        InternalName = "ContentType",
        Type = SharepointCopy.FieldType.Text,
        Value = "xml"
    };

    SharepointCopy.FieldInformation[] metadata = { titleInformation };

    // initialize the web service
    SharepointCopy.CopySoapClient copyws = new SharepointCopy.CopySoapClient();

    // execute the CopyIntoItems method
    copyws.CopyIntoItemsCompleted += copyws_CopyIntoItemsCompleted;
    copyws.CopyIntoItemsAsync("http://null", destinationUrls, metadata, file);
}
Many Thanks to Karine Bosch for the solution here: http://social.msdn.microsoft.com/Forums/en/sharepointdevelopment/thread/f135aaa2-3345-483f-ade4-e4fd597d50d4
What type of SharePoint deployment, and what version of Silverlight? If it is an intranet deployment, you could use UNC paths to access your document library in SharePoint, together with the SaveFileDialog/OpenFileDialog available in Silverlight 3.
http://progproblems.blogspot.com/2009/11/saveread-file-from-silverlight-30-in.html
or
http://www.kirupa.com/blend_silverlight/saving_file_locally_pg1.htm
Silverlight has restrictions on what it can do with local files, though I've read that Silverlight 4 changes some of this.
http://www.wintellect.com/CS/blogs/jprosise/archive/2009/12/16/silverlight-4-s-new-local-file-system-support.aspx
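For the upload itself, a minimal sketch of wiring OpenFileDialog to the UploadFile method from the accepted answer (my assumption, not from either answer; error handling omitted):

// Let the user pick a document, read it fully, then hand the bytes to UploadFile
OpenFileDialog dialog = new OpenFileDialog { Filter = "Word documents (*.docx)|*.docx" };
if (dialog.ShowDialog() == true)
{
    using (Stream stream = dialog.File.OpenRead())
    {
        byte[] buffer = new byte[stream.Length];
        stream.Read(buffer, 0, buffer.Length);
        UploadFile(dialog.File.Name, buffer);
    }
}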
