Can we use the DeliveryConfirmationV4 API for international label generation? - usps

In our system we are using USPS for shipping.
My question is: if we also want to do international shipments and we are using the DeliveryConfirmationV4 API to generate the label, can we use the same API for international shipping as well, or do we have to use the ExpressMailIntl API?

You will have to use the ExpressMailIntl API, as the international APIs have different requirements in terms of the data they need.
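For illustration, both APIs go through the same USPS Web Tools endpoint but take a different API name and a different request document. Here is a minimal Python sketch; the XML bodies are placeholders rather than the complete element sets, so check the Web Tools API guides for the exact fields each API requires.

```python
import requests

# USPS Web Tools endpoint: both APIs are reached through the same URL,
# differing only in the API name and the request XML they expect.
WEBTOOLS_URL = "https://secure.shippingapis.com/ShippingAPI.dll"

def call_webtools(api_name, request_xml):
    """Send a Web Tools request and return the raw XML response."""
    resp = requests.get(WEBTOOLS_URL, params={"API": api_name, "XML": request_xml})
    resp.raise_for_status()
    return resp.text

# Domestic label with delivery confirmation (placeholder XML body):
# call_webtools("DeliveryConfirmationV4",
#               '<DeliveryConfirmationV4.0Request USERID="...">...</DeliveryConfirmationV4.0Request>')

# International Express Mail label: requires additional data (customs/contents
# information) that the domestic API never asks for (placeholder XML body):
# call_webtools("ExpressMailIntl",
#               '<ExpressMailIntlRequest USERID="...">...</ExpressMailIntlRequest>')
```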

Related

IBM Watson service for identifying a particular characteristic, such as "helpfulness", in a person's tweets or Facebook posts?

I am currently exploring three services for identifying whether a person's tweets or Facebook posts are helpful or not:
Personality Insights
Natural Language Understanding
Discovery
Will I need to write my own wrapper on top of these services to identify the helpfulness characteristic, or is there another way to just query and get a result?
Can anyone please advise which service I should use for this task?
Thanks
As Neil says, it all depends on how you define helpfulness.
Discovery:
If you want to use Discovery, you need a base of data to work against, and you can narrow it down to the data you want with filters. Discovery uses data analysis combined with cognitive intuition to take your unstructured data and enrich it so you can discover the information you need.
Personality:
If you want to use Personality Insights: it helps you understand personality characteristics, needs, and values in written text. The service uses linguistic analytics to infer individuals' intrinsic personality characteristics, including Big Five, Needs, and Values, from digital communications such as email, text messages, tweets, and forum posts.
Watson Knowledge Studio:
If you want to work with models for tweets, you can use WKS (Watson Knowledge Studio). This service provides easy-to-use tools for annotating unstructured domain literature and uses those annotations to create a custom machine-learning model that understands the language of the domain. The accuracy of the model improves through iterative testing, ultimately resulting in an algorithm that can learn from the patterns that it sees and recognize those patterns in large collections of new documents. For example, if you want it to learn about cars, you can simply give some examples to WKS.
It all depends on how you define helpfulness: whether it is helpfulness in general, or helpfulness in answering a question, etc.
For Personality Insights, have a look at https://www.ibm.com/watson/developercloud/doc/personality-insights/models.html which has all the traits, as well as what they mean. The closest trait to helpfulness is probably Conscientiousness.
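As a rough sketch, you could query Personality Insights over HTTP and pull out the Conscientiousness percentile like this. The service URL, credentials, and version date are placeholders, and the response field names follow the v3 profile endpoint as documented at the time; check the current docs before relying on them.

```python
import requests

# Placeholders: substitute your own service credentials and instance URL.
PI_URL = "https://gateway.watsonplatform.net/personality-insights/api/v3/profile"
USERNAME = "your-service-username"
PASSWORD = "your-service-password"

def conscientiousness_percentile(text):
    """Send plain text to Personality Insights and return the Big Five
    Conscientiousness percentile (0.0 to 1.0), or None if not found."""
    resp = requests.post(
        PI_URL,
        params={"version": "2017-10-13"},   # version date is a placeholder
        auth=(USERNAME, PASSWORD),
        headers={"Content-Type": "text/plain"},
        data=text.encode("utf-8"),
    )
    resp.raise_for_status()
    profile = resp.json()
    for trait in profile["personality"]:
        if trait["trait_id"] == "big5_conscientiousness":
            return trait["percentile"]
    return None

# Usage: pass in a user's collected tweets/posts as one block of text.
# score = conscientiousness_percentile("\n".join(collected_tweets))
```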
Neil

How does Zapier/IFTTT implement the triggers and actions for different API providers?

How do Zapier/IFTTT implement the triggers and actions for different API providers? Is there a generic approach, or are they implemented individually?
I think the implementation is based on REST/OAuth, which looks generic from a high level. But Zapier/IFTTT define a lot of trigger conditions and filters, and these conditions and filters have to be specific to each provider. Is the corresponding implementation done individually or generically? If individually, it must require a vast labor force. If generically, how is that done?
Zapier developer here - the short answer is, we implement each one!
While standards like OAuth make it easier to reuse some of the code from one API to another, there is no getting around the fact that each API has unique endpoints and unique requirements. What works for one API will not necessarily work for another. Internally, we have abstracted away as much of the process as we can into reusable bits, but there is always some work involved to add a new API.
PipeThru developer here...
There are common elements to each API which can be re-used, such as OAuth authentication, common data formats (JSON, XML, etc). Most APIs strive for a RESTful implementation. However, theory meets reality and most APIs are all over the place.
Each service offers its own endpoints, and there is no commonly agreed-upon set of endpoints that is correct for a given kind of service. For example, within CRM software, it's not clear how a person, notes on said person, corresponding phone numbers, addresses, as well as activities should be represented. Do you provide one endpoint or several? How do you update each? Do you provide tangential records (like the company for the person) with the record or not? Each requires specific knowledge of that service as well as some data normalization.
Most of the triggers involve checking for a new record (unique id) or an updated field, most usually the last-update timestamp. Most services present their timestamps in ISO 8601 format, which makes parsing timestamps easy, but not all of them do. Dropbox actually provides a delta API endpoint to which you can present a hash value, and Dropbox will send you everything new/changed from that point. I would love to see delta and/or activity endpoints in more APIs.
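As a rough sketch of the timestamp approach (the record shape and the "updated_at" field name are hypothetical; every API spells these differently):

```python
from datetime import datetime, timezone

def changed_since(records, last_poll):
    """Return the records whose ISO 8601 'updated_at' is newer than the last poll."""
    changed = []
    for record in records:
        # Normalize a trailing "Z" so fromisoformat() accepts it on older Pythons.
        stamp = record["updated_at"].replace("Z", "+00:00")
        if datetime.fromisoformat(stamp) > last_poll:
            changed.append(record)
    return changed

# last_poll = datetime(2024, 1, 1, tzinfo=timezone.utc)
# new_or_updated = changed_since(api_response["items"], last_poll)
```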
Bottom line, integrating each individual service does require a good amount of effort and testing.
I will point out that Zapier did implement an API for other companies to plug into their tool. Instead of Zapier implementing your API and Zapier polling you for data, you can send new/updated data to Zapier to trigger one of their Zaps. I like to think of this like webhooks on crack. This allows Zapier to support many more services without having to program each one.
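For example, pushing a record into one of Zapier's catch-hook URLs is just an HTTP POST. A minimal sketch (the hook URL and payload fields below are placeholders for whatever your Zap expects):

```python
import requests

# Placeholder URL: each "Catch Hook" trigger in Zapier issues its own unique URL.
ZAPIER_HOOK_URL = "https://hooks.zapier.com/hooks/catch/1234567/abcdefg/"

def notify_zapier(record):
    """Push a new/updated record to Zapier instead of waiting to be polled."""
    resp = requests.post(ZAPIER_HOOK_URL, json=record)
    resp.raise_for_status()

# notify_zapier({"id": 42, "email": "new.signup@example.com", "plan": "pro"})
```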
I've implemented a few APIs on Zapier, so I think I can provide at least a partial answer here. If not using webhooks, Zapier will examine the API response from a service for the field with the shortest name that also includes the string "id". Changes to this field cause Zapier to trigger a task. This is based on the assumption that an id is usually incremental or random.
I've had to work around this by shifting the id value to another field and writing different values to id when it was failing to trigger or triggering too frequently (dividing the id by 10 before writing it can reduce the trigger sensitivity, for example). Ambiguity is also a problem, for example in an API response that contains fields like post_id and mesg_id.
Short answer is that the system makes an educated guess, but to get it working reliably for a specific service, you should be quite specific in your code regarding what constitutes a trigger event.
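As a generic sketch of that kind of id-based deduplication (the id field name and the in-memory set are assumptions; a real integration would persist the seen ids between polls):

```python
def new_records(records, seen_ids, id_field="id"):
    """Return only records whose id has not been seen before, and remember them.

    This mirrors the id-based trigger behaviour described above: a record fires
    the trigger once, then is ignored on later polls.
    """
    fresh = []
    for record in records:
        record_id = record[id_field]
        if record_id not in seen_ids:
            seen_ids.add(record_id)
            fresh.append(record)
    return fresh

# seen = set()                        # in practice, persisted between polls
# to_trigger = new_records(api_response["items"], seen)
```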

SiteCatalyst (Omniture) Order of Operations: Tagging vs. Configuration?

What is best to do first: configuring SiteCatalyst (Omniture) within the platform (i.e. naming the reports/variables), or deploying the tags?
You can get some standard reports by just tagging for the standard variables (pagename, products and events like scAdd, purchase etc) but you have to define the metrics your company specifically needs at some point to get the most out of Analytics to report against KPIs etc.
It is important to understand (on paper etc) the business KPIs and reports your business needs to see, then define/configure the variables (eVars/events/Props) so that they support the KPIs/Reports, then do tagging to support these variables, then when you have data in the system design the reports/dashboards in Analytics (SiteCatalyst). Then iterate over this lots of times.
I would answer "at the same time". With a standard implementation, configuration would happen a bit earlier; if you use Context Data Variables, it can go the opposite way. But as Crayon mentioned, the reports don't make sense until you have done both activities.
And after all, I would highly recommend doing the analysis and documentation before both of these steps.

Interfacing with a Hardware Security Module on Linux

I have to work with an HSM device to meet the security requirements in my project. I am confused about how an HSM is interfaced with C on a Linux machine.
How does a user access the HSM's internal memory to perform different operations with it?
Every HSM vendor supports at least one cryptographic API. PKCS#11 is a particularly common choice, but there are many other options. OpenSSL, for example, supports HSMs through an engine interface.
Often the vendor will expose a proprietary API in addition to the "standard" APIs it implements. The proprietary API typically offers a greater degree of control over key security properties and key usage than is possible to express in the standard APIs.
When using an HSM, one typically issues a command to load a key from a secure store and retrieve a handle to the key object. This handle is the layer of abstraction that allows the HSM to perform the key operations securely without exposing the key material.
With regards to your project, it is important that you don't simply "shove" the HSM somewhere in your solution to make it appear secure. Instead, think long and hard about the security properties of your system and how cryptography may help you defend against attacks. Once you've identified your attack vectors (and your associated cryptographic defences), then consider which cryptographic API can support your use cases. Only then should you select the best vendor from those who support that API.
In my experience, the standard APIs only suffice for simple security systems. For complex projects, it's almost always necessary to work with the proprietary API of a particular vendor. In such cases, lean heavily on the vendor for support and proof-of-concepts before settling on a product that truly meets your needs.
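To illustrate the handle-based flow described above, here is a short sketch using the PyKCS11 wrapper in Python rather than C (the same load/login/find/sign sequence applies when calling the PKCS#11 functions from C). The module path, PIN, and key label are placeholders for your vendor's library and your token setup.

```python
import PyKCS11

# Placeholders: your HSM vendor ships the PKCS#11 module (.so); the PIN and
# key label are whatever you chose when provisioning the token.
MODULE_PATH = "/usr/lib/your-vendor/libvendor-pkcs11.so"
USER_PIN = "1234"
KEY_LABEL = "my-signing-key"

lib = PyKCS11.PyKCS11Lib()
lib.load(MODULE_PATH)

slot = lib.getSlotList(tokenPresent=True)[0]
session = lib.openSession(slot, PyKCS11.CKF_SERIAL_SESSION | PyKCS11.CKF_RW_SESSION)
session.login(USER_PIN)

# Look up the key object and get back a handle; the private key material
# itself never leaves the HSM.
key = session.findObjects([(PyKCS11.CKA_CLASS, PyKCS11.CKO_PRIVATE_KEY),
                           (PyKCS11.CKA_LABEL, KEY_LABEL)])[0]

# Ask the HSM to perform the signature using that handle.
signature = bytes(session.sign(key, b"message to sign",
                               PyKCS11.Mechanism(PyKCS11.CKM_SHA256_RSA_PKCS, None)))

session.logout()
session.closeSession()
```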
I know this is a year old, but in case someone else runs across it, there is a more detailed discussion at this link:
Digital Signing using certificate and key from USB token
Including some long-form working code that I added. You are also welcome to get my code directly at this link: https://github.com/tkil/openssl-pkcs11-samples
Good luck!
The HSM vendor should have provided you a library. You can use this library to interact with your HSM via the PKCS#11 interface. You will need the PKCS#11 header files in your project in order to do that.
Check out this site http://www.calsoftlabs.com/whitepapers/public-key-cryptography.html to get an introduction.

In what language should the API be written?

We want to implement an API, we have a database located on a central server, and a network of many computers.
On these computers, several local programs will be developed in the future using different programming languages: some in Java, some in Perl, C++, etc.
These local programs should be able to access the API functions and interact with the database.
So in what language should the API be written, so that it can be bound to the other languages? Is there any specific architecture that should be implemented?
Is there any link that would provide useful information about this ?
If the API is pure database access, then a REST web service is a reasonable choice. It allows a (reasonably) easy interface from almost any language, and allows you to choose whatever language you feel is best for writing the actual web service. However, in doing it this way, you're paying the cost of an extra network call per API call. If you put the web service on the same host (or local network) as the database, you can minimize the cost of the network call from the web service to the database, which mitigates the cost of the extra call to the API.
If the API has business logic in it, there are two viable approaches...
You can write the API as a library that can be used from multiple languages. C is a good choice for this because most languages can link C libraries, but the languages you expect to use it from can have a large impact, too. For example, if you know it's always going to be used by a language hosted on the JVM, then any JVM language is probably a reasonably good choice.
Another choice is to use a hybrid of the two. A REST API for database access, plus a business layer library written in multiple languages. The idea being that you have business logic on the application end, but it's simple enough that you can write a "client library" in multiple languages that knows how to call out to the REST API and then apply business logic to the results it gets back. Assuming the business logic isn't too complex (ie, limited to ways to merge/view the database data), then this isn't a bad solution.
The benefit is that it should be relatively easy to supply one "default" library that can be used by many languages, plus other language specific versions of the library where you have time available to implement them. For cases where figuring out what calls need to be made to the database, and how to combine the results, can be complicated, I find this to be a reasonably good solution.
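To make the pure-database-access option concrete, here is a minimal sketch of such a REST service. Flask and SQLite are arbitrary choices for the sketch, and the table and column names are made up; the point is only that any client language that can speak HTTP and JSON can use it.

```python
import sqlite3
from flask import Flask, jsonify, request

app = Flask(__name__)
DB_PATH = "central.db"   # placeholder for the database on the central server

def query(sql, params=()):
    """Run a read query and return rows as plain dicts."""
    with sqlite3.connect(DB_PATH) as conn:
        conn.row_factory = sqlite3.Row
        return [dict(row) for row in conn.execute(sql, params)]

# Any client (Java, Perl, C++, ...) calls this over HTTP and parses JSON.
@app.route("/api/customers/<int:customer_id>", methods=["GET"])
def get_customer(customer_id):
    rows = query("SELECT id, name, email FROM customers WHERE id = ?", (customer_id,))
    if not rows:
        return jsonify({"error": "not found"}), 404
    return jsonify(rows[0])

@app.route("/api/customers", methods=["POST"])
def create_customer():
    data = request.get_json()
    with sqlite3.connect(DB_PATH) as conn:
        cur = conn.execute("INSERT INTO customers (name, email) VALUES (?, ?)",
                           (data["name"], data["email"]))
        return jsonify({"id": cur.lastrowid}), 201

if __name__ == "__main__":
    app.run()
```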
I would resort to web services. It doesn't matter what language you use; as long as you have a framework to interact with web services, you are good. Depending on your needs, you could expose a simple REST API or go all the way with SOAP/WSDL and the like.
