What is the best way to debug vCloud client REST applications?

I'm building a vCloud client application via the REST APIs; however, the documentation is inconsistent and in some cases just wrong and misleading.
All I really need is a solid debug tool or even a log file. Any recommendations?

You already mentioned you have access to the message stream, which is one of the first steps. Typically, if I'm using Apache HttpClient/HttpComponents, I'll increase the log level so it logs the full HTTP requests.
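For example, with HttpClient 4.x you can turn on wire logging through the commons-logging SimpleLog bridge before any requests are made (a minimal sketch; if you route logging to log4j or another backend, set the equivalent logger levels there instead):

public class WireLogging {
    public static void main(String[] args) {
        // Route commons-logging to SimpleLog and turn up the HttpClient loggers.
        System.setProperty("org.apache.commons.logging.Log",
                "org.apache.commons.logging.impl.SimpleLog");
        System.setProperty("org.apache.commons.logging.simplelog.showdatetime", "true");
        // Full request/response bodies as they cross the wire:
        System.setProperty("org.apache.commons.logging.simplelog.log.org.apache.http.wire", "DEBUG");
        // Just the HTTP headers:
        System.setProperty("org.apache.commons.logging.simplelog.log.org.apache.http.headers", "DEBUG");
        // ... now create your HttpClient and make requests as usual ...
    }
}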
My next step is usually to cheat and to log into vCD as a system administrator and see what's going on. When vCD was designed there was a very deliberate decision to not reveal infrastructure level problems to tenants of the cloud (normal org users or org admins), as that would break the cloud abstraction. Sadly, that means as an org-level user you're often going to get "contact your cloud admin" error responses. We are aware that this isn't ideal and try to find ways to make it better when we can (IIRC the new 5.5 release that was announced last month does have some improvements in that area).
The last step is usually to cheat even more and to look at the server side logs (vcloud-container-debug.log, specifically). That usually gives me a better clue as to what went wrong. Of course, you may be unlucky and not have access to the vCD cell machine.
My workaround in the latter two cases is to try the operations via the vCD UI and see (1) if they work as expected and (2) if they do, to check the system state via the API and see if I'm sending the wrong request payloads, etc. because the doc or schema reference may not have been clear enough.
Regarding the documentation, please use the feedback links found on individual doc pages to let us know! Our technical writer reviews all the feedback and tries to address it.
My final suggestion is that you might want to post API questions to the vCloud API community forum VMware has. There are a number of experts (both users and VMware employees) that monitor it and respond to questions.

Detect user location in React app: can Fastly be used with a non-server-side-rendered React app?

This is my first task detecting users' geo locations, and I am a fairly new dev.
The app uses React and the backend is Node.js.
Currently we have some functions that call an API which returns users' locations (this takes a while).
But two other options right now are:
Geolocation API <--- this might need users' permission?
Fastly
For Fastly, I am asking:
Does it work with a non-server-side-rendered app?
For the production site, we have Fastly set up in Route 53, but I need to ask DevOps about the staging environment. (I got this info from others but do not know what it means.)
Can someone explain to me how Fastly works and what needs to be set up?
Basically, any information is appreciated. I do not know what I should google to find the answers.
Thanks.
If you have Fastly fronting your app, then YES you can definitely use Fastly to provide geolocation information.
Just to be clear (as you mentioned you were unfamiliar with Fastly and more generally are a "new dev"), when I say "fronting your app" I mean: when a client (e.g. a user's web browser) makes a request for https://yourapp.com/, does the request first get routed through Fastly? If it does, then Fastly will proxy the request through to your app and any data you send back through Fastly to the client will likely be cached to make future requests for all your users much quicker (this is one of the many functions Fastly provides).
Fastly has lots of products, but for your primary purposes there are two platform services Fastly offers:
Content Delivery (CDN), which is built on Varnish/VCL (if your ops team already has Fastly set up, then this is likely what they have).
Compute@Edge, which is built on WebAssembly.
I would highly recommend reading the following resources to understand more about the Fastly platform options:
Content Delivery with VCL
Content Delivery with Compute@Edge
As far as using Fastly to handle geolocation information, I'll point you to the following resources:
https://developer.fastly.com/solutions/examples/geo-ip-api-at-the-edge
https://developer.fastly.com/solutions/examples/decorating-origin-requests-with-geoip
Also search the following page for references to "geolocation" as there are quite a few 'examples' that you might be interested in:
https://developer.fastly.com/solutions/examples/
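To make the origin side concrete, here is a minimal sketch of reading a geolocation header at your backend (written in Java with the plain JDK for illustration; the same idea applies to a Node backend). The header name X-Geo-Country is purely an assumption: you would populate it yourself at the edge (e.g. from client.geo.country_code in your VCL), so substitute whatever your Fastly service actually sets:

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class GeoOrigin {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            // Hypothetical header; your edge config must set it, e.g. in VCL:
            //   set req.http.X-Geo-Country = client.geo.country_code;
            String country = exchange.getRequestHeaders().getFirst("X-Geo-Country");
            String body = "Country: " + (country != null ? country : "unknown");
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(bytes);
            }
        });
        server.start();
    }
}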
I would also suggest having a play around with https://fiddle.fastly.dev, which lets you use either VCL or any of the supported Compute@Edge languages to test out ideas without needing a real Fastly service set up. This will give you a chance to try out some geolocation code.
Lastly, you can also have a read through the first half of https://www.integralist.co.uk/posts/fastly-varnish/ which covers some basics about Fastly's use of Varnish/VCL (but I'd suggest reading the official references, linked above, first).
Any other questions, then please feel free to reach out to support@fastly.com, who will be happy to help.

Has the JCA Ever Worked? It can't, according to its documentation

I am working on a JCA implementation of Jackrabbit with a custom JAAS login module. The idea is to integrate Apache Shiro authentication and authorization via the login module, using a RepositoryLoginContext that simply furnishes user name and password from a Shiro token for the callback functions.
(I have only a small number of users, which are configured with a shiro.ini file.)
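(For reference, here is a minimal sketch of the kind of CallbackHandler my RepositoryLoginContext wraps; the class and variable names are illustrative, and it assumes Shiro's UsernamePasswordToken.)

import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.callback.UnsupportedCallbackException;
import org.apache.shiro.authc.UsernamePasswordToken;

// Hypothetical sketch: feed a Shiro token's credentials to the JAAS callbacks.
public class ShiroCallbackHandler implements CallbackHandler {
    private final UsernamePasswordToken token;

    public ShiroCallbackHandler(UsernamePasswordToken token) {
        this.token = token;
    }

    @Override
    public void handle(Callback[] callbacks) throws UnsupportedCallbackException {
        for (Callback cb : callbacks) {
            if (cb instanceof NameCallback) {
                ((NameCallback) cb).setName(token.getUsername());
            } else if (cb instanceof PasswordCallback) {
                ((PasswordCallback) cb).setPassword(token.getPassword());
            } else {
                throw new UnsupportedCallbackException(cb);
            }
        }
    }
}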
All the pieces seem to fit until I try connecting to the repository. One attempt is
SimpleCredentials userJCACredentials = new SimpleCredentials(username, shiroUserCreds.getPassword());
but I cannot seem to find a build path JAR that makes it happy. The Jackrabbit API docs have me flummoxed. If I look for SimpleCredentials, I get a Day Software page (!), but if I look for CryptedSimpleCredentials, I get an Apache page. Thinking that perhaps only the crypted version can be used with the resource adapter, I tried changing to that, but ran into the same problem with
connInfo = new JCAConnectionRequestInfo(cryptedCreds, workspaceName);
which only wants SimpleCredentials in the first argument. I keep finding dead ends in the API docs, such as JCAConnectionRequestInfo(Credentials creds, String workspace): if you click on the Credentials link, it times out. Another gem is one of JCAManagedConnectionFactory's constructors, which has text about IBM WebSphere (!!).
I tried writing my own class (based on SimpleCredentials) implementing the Credentials interface, and the error with
new JCAConnectionRequestInfo(cryptedCreds, workspaceName)
turned into an inability to resolve javax.jcr.Credentials. With a Jackrabbit installation, the javax.jcr path does not exist (at least for the resource adapter version).
Failing to make any sense of the foregoing, I tried a second approach.
repoParameters.put("homeDir",ARCHIVE_REPO_DIR);
repoParameters.put("configFile",ARCHIVE_REPO_CONFIG);
repoMan = JCARepositoryManager.getInstance();
repo = repoMan.createRepository(repoParameters);
The last line, using a Map argument, was prompted by Eclipse auto-completion, in conflict with the documentation showing createRepository(String, String). In any case, an error about resolution of javax.jcr.Repository appears. Back to the same stuff.
I've explored every bottom-up path in the libraries to get a Session. It seems to be impossible, ultimately failing on a non-existent definition.
Looking at the source code, I've assembled the following:
JCAManagedConnectionFactory mcf = new JCAManagedConnectionFactory();
mcf.setConfigFile(...);
mcf.setHomeDir(...);
boolean success = true;
try {
    mcf.setLogWriter(lpw);
    connectionFactory = mcf.createConnectionFactory();
} catch (ResourceException rex) {
    logger.error("client session failed to create connection factory");
    logger.error(rex);
    success = false; // mark failure only when the factory throws
}
if (success) {
    repo = (RepositoryImpl) connectionFactory;
    session = repo.login(/* needs Credentials here */);
}
The login call needs Credentials and a workspace name. If the login is going to be handled by JAAS, I'd expect to find a login() with no arguments wherein the JAAS login would take over.
I used Jackrabbit a couple of years ago in a project and had some small success with it, but then moved from Jackrabbit to ModeShape, which is an alternative implementation of the JCR (JSR-283). This is a very active project in which I have had a (very small) involvement. Development continues at a great rate, with new releases regularly coming out.
I started with version 2.8 which wasn't bad but the 3.0 release was an almost complete rewrite with a focus on performance and was a delight to use. 4.0 is now in the last stages of being released.
The changeover from Jackrabbit (2.2.7) to ModeShape (2.8.1) was relatively painless, with most of the effort in setting up the configuration and runtime.
I'm not saying you won't find problems like you are seeing in Jackrabbit, but if you ask a question in the ModeShape forums you will get a good answer and plenty of ongoing help.
Lessons Learned:
Integrating Shiro was not a great idea, but not at all because of Shiro. The JCA requires a JAAS login. "Integrating" Shiro requires enough understanding of JAAS that I might as well just adopt JAAS and be done with it. But further along, I realized that it was fine. There are larger issues ahead.
JAAS is, well, JAAS. I was far from able to breeze through it and pick up what I needed, although I have gotten pretty far. The answer to the last dilemma in the post is simply that the login method called is not found in the JCA; it's found in the JAAS login module that you code and "register" through the Geronimo admin console by creating a new security realm for your application. A large part of the JAAS learning curve is that the model is very abstract. There is no "JAAS in a Nutshell for Geronimo" to enlighten would-be programmers. Another part is that it is so widely used that information specific to everything but what I'm doing seems ubiquitous. That isn't surprising, because web apps are by far its greatest usage. And following that, it is not really surprising to find a JAAS login module with Jackrabbit, as the JCR was envisioned for web content.
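For the record, the application-side login ends up looking roughly like this (a sketch, not my exact code; "ArchiveRealm" stands in for whatever realm name you created in the Geronimo console, and ShiroCallbackHandler is the handler sketched in the question):

import javax.security.auth.Subject;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;
import org.apache.shiro.authc.UsernamePasswordToken;

public class JaasLoginSketch {
    public static Subject login(String user, char[] pass) throws LoginException {
        // "ArchiveRealm" is hypothetical: substitute the security realm you
        // registered through the Geronimo admin console.
        UsernamePasswordToken token = new UsernamePasswordToken(user, pass);
        LoginContext lc = new LoginContext("ArchiveRealm",
                new ShiroCallbackHandler(token));
        lc.login(); // this is what ends up invoking the login module's login()
        return lc.getSubject();
    }
}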
It turns out that burrowing into JAAS, though enlightening, was a complete waste of time.
After putting the effort into getting JAAS together, I finally had a Jackrabbit session handle. I immediately put it to work on a UTX-wrapped piece of old code.
Things stopped cold here. None of my old code worked, starting with getRootNode(). I let Eclipse show me the possible methods for a session handle and I tested each one. The list of those that work is short. Some UTX-related stuff works, as do equals(), hasCapability(), isLive(), and little else. Sixty-plus methods of the session handle cannot work. THIS MEANS THERE IS NO WAY TO GET A NODE, WHICH MEANS YOU CANNOT DO ANYTHING USEFUL WITH THE JCA.
As to my original question, I will say that at one point the JCA probably did work. After all, no one would code all that stuff just to have it lined out. And I will also say that whenever the code was modified to use JAAS, it was badly broken and rendered useless.
This really is a soapbox moment. The amount of time I invested in this was significant. One reason I wanted to use Jackrabbit was that it could run in the same JVM as my application, in Geronimo. I can understand documentation that isn't up to par and other realities of open source. But I could only speculate about how this happened, and the alternatives are all pretty negative. I smelled a rat earlier but said nothing.
One thing is clear. The JCA (and for all I know, the rest of the Jackrabbit project) has no business being touted as an Apache Project. It is a disservice to their name and an extreme disservice to developers who've been led down a dead end path.
I am not sure how to go about it, but I want to bring this to the attention of the Apache foundation if for no other reason than preventing anyone else from going through this.

What can I do with generated error logs?

I'm currently working on a web application which generates daily error (and non error) logs.
The current system outputs a log per task to a text file, and outputs critical errors as well as "start" and "finish" type messages to an email account.
The current workflow is as follows: scour the email box for errors, then go and find the .txt file to look at the associated errors and find the cause.
There are around 30 .txt files split across about 5 servers.
This system was set up before me, but I'm looking for any advice on how to deal with the situation.
I have control of the script forming the error logs, so I can do pretty much anything, but I'm not sure where to start: I'd considered some kind of web-facing dashboard tool, or maybe outputting the files to RSS or something.
Are there any external or internal tools I should be using?
You could use SQL Server Reporting Services, or review this comparison table; some of the packages there may support SQL Server, but they may be overwhelming for your task.
It's not really clear what your problem is or what you want to do, but if I understand correctly, your biggest problem is that some messages are logged to a log file but others are sent by email. Therefore, there is no single location that has all error messages in it and that makes analysis and troubleshooting difficult.
The best solution would be to use a logging framework that supports multiple logging destinations (file, DB, email) and severities. That would allow you to specify a configuration like "all errors are logged to a text file and critical ones are also sent by email", so you can ensure that you have everything in one place for general analysis but critical errors are also handled with priority.
You didn't mention what programming language you use, but assuming it's .NET-based, log4net and Enterprise Library are two common frameworks, and there are many questions about them here on SO. Googling should give you a good idea of the pros and cons for your situation. If you're using a different language, you can look for the equivalent package: log4j (Java), logging (Python), etc.
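As an illustration of the "file for everything, email only for critical" configuration, here is a rough sketch in log4j 1.x (Java); the host and addresses are placeholders, and log4net's appender setup is closely analogous:

import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;
import org.apache.log4j.RollingFileAppender;
import org.apache.log4j.net.SMTPAppender;

public class LogSetup {
    public static void configure() throws java.io.IOException {
        Logger root = Logger.getRootLogger();
        PatternLayout layout = new PatternLayout("%d %-5p %c - %m%n");

        // Everything at ERROR and above goes to a file on each server.
        RollingFileAppender file = new RollingFileAppender(layout, "errors.log");
        file.setThreshold(Level.ERROR);
        root.addAppender(file);

        // Only FATAL messages also trigger an email.
        SMTPAppender mail = new SMTPAppender();
        mail.setSMTPHost("smtp.example.com"); // placeholder
        mail.setFrom("app@example.com");      // placeholder
        mail.setTo("oncall@example.com");     // placeholder
        mail.setSubject("Critical application error");
        mail.setLayout(layout);
        mail.setThreshold(Level.FATAL);
        mail.activateOptions();
        root.addAppender(mail);
    }
}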

Drupal 7, Domain Access, and SSO (Single Sign-On)

Has anyone made any headway to date with a single sign-on solution with Domain Access for Drupal 7? I've been looking closely at two old modules, one no longer maintained (SSO for D6) and one still maintained (CAS). I've also read that SAML might be a key to unlocking this, but am uncertain.
Facebook's FBConnect might be another option, or another way could be integrating OpenID, from what I've read and experienced on Stack Overflow's sub-sites.
I know that OpenID can do this, since we are logged into all of the *Overflow sub-sites at the same time using one login. The question is how it crosses domains. Does it handshake with one half of a matching hash? I cannot find any documentation on this, so I am at a loss.
So, are there any solutions known to date, or information on where to start looking? I think I've made a good point at the possibilities. I read this thread, Domain Access SSO, but am uncertain which version it pertains to (Drupal, DA, SSO, or otherwise). It looks like the "solution" is to create a master table set with users and permissions, then share those across the domains? How might this work if there are already multiple sites created under Domain Access? Would you clone and rebuild the entire installation, or would you need to start from scratch? It really raises more questions than answers. I contacted the author with no response, so the questions still stand.
Any opinions out there on the who, what, or why would be greatly appreciated; I just need a starting point to get the ball rolling. Thanks, everyone.
I'm the author of the Domain Access SSO article mentioned in the original question. I don't recall being contacted about it, but then again I recently learned that my "contact" page on bleen.net hasn't been working in a while... but anyway, here is a bit of info:
That post referred to Drupal 6, SSO module 6.x-1.0-rc1, and Domain Access module 6.x-2.0 (I think). That solution basically revolves around creating two separate Drupal installs, one the master and one the client (there can be multiple clients). Basically, the necessary user tables for all the clients are pointed instead at the master. In doing so, the master becomes (essentially) a shell site that does nothing but hold and verify user data.
Hope that makes sense and/or helps... to be honest, I haven't looked at that code in a long while now.
SAML is a good option. Check this module to integrate it with Drupal:
http://drupal.org/project/simplesamlphp_auth
If you need a demo of this plugin working, check this.

Need ideas on retrieving data from a website

I'm stumped and need some ideas on how to do this or even whether it can be done at all.
I have a client who would like to build a website tailored to English-speaking travelers in a specific country (Thailand, in this case). The different modes of transportation (bus and train) have good websites providing their respective information, and both are very static in terms of the data they present (the schedules rarely change). Here's one of the sites I would need to get info from: train schedules. The client wants to give users the ability to search for a beginning and end location and determine, using the external websites' information, how they can best get there, being provided a route with schedule times for the chosen modes of transport.
Now, in my limited experience, I would think the way to do this would be to retrieve the original schedule info from the external site's server (via an API or some other means) and retain it in a database, which can be queried as needed. Our first thought was to contact the respective authorities to determine how/if this can be done, but this has proven problematic, mainly due to the language barrier.
My client suggested what is basically "screen scraping", but that sounds complicated at best: downloading the web page(s) and filtering through the HTML for the relevant data to put into the database. My worry is that the info on these mostly static sites is so static that the data isn't even kept in a database to build the page, and the web page itself is updated (hard-coded) when something changes.
I could really use some help and suggestions here. Thanks!
Screen scraping is always problematic, IMO, as you are at the mercy of the person who wrote the page. If the content is static, then I think it would be easier to copy the data manually into your database. If you wanted to keep up to date with changes, you could snapshot the page when you transcribe the info and run a job to periodically check whether the page has changed from the snapshot; when it does, it sends you an email so you can update it.
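A minimal sketch of that snapshot check in Java (the URL and file name are placeholders, and a real job would send an email instead of printing):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

public class PageWatcher {
    public static void main(String[] args) throws Exception {
        // Placeholder URL: point this at the schedule page you transcribed.
        String url = "https://example.com/train-schedule";
        HttpClient client = HttpClient.newHttpClient();
        HttpResponse<String> resp = client.send(
                HttpRequest.newBuilder(URI.create(url)).build(),
                HttpResponse.BodyHandlers.ofString());

        // Hash the page body and compare it with the stored snapshot hash.
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(resp.body().getBytes());
        String hash = HexFormat.of().formatHex(digest);

        Path snapshot = Path.of("schedule.sha256");
        String previous = Files.exists(snapshot) ? Files.readString(snapshot) : "";
        if (!hash.equals(previous)) {
            // In a real job you would send an email here instead of printing.
            System.out.println("Page changed; review and update the database.");
            Files.writeString(snapshot, hash);
        }
    }
}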
The above method could also be used in conjunction with some sort of screen scraper, which could fall back to a manual process if the page changes too drastically.
Ultimately, it is a case of how much effort (cost) your client is willing to bear for accuracy.
I have done this for the following site: http://www.buscatchers.com/, so it's definitely more than doable! A key feature of a web scraping solution for travel sites is that it must send you emails if anything goes wrong during the scraping process. On the site, I use a two-day window, so I have two days to fix the code if the design changes. Only once or twice have I had to change my code, and it's very easy to do.
As for some examples: there is some simplified source code here: http://www.buscatchers.com/about/guide. The full source code for the project is here: https://github.com/nicodjimenez/bus_catchers. This should give you some ideas on how to get started.
I can tell that the data is dynamic; it's too well structured. It's not hard for someone who is familiar with XPath to scrape this site.
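For example, here is a sketch of the extraction step in Java; it uses the jsoup library with CSS selectors (standing in for raw XPath), and the URL and table markup are assumptions about the target page:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class ScheduleScraper {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and selectors: adapt them to the real page's markup.
        Document doc = Jsoup.connect("https://example.com/train-schedule").get();
        for (Element row : doc.select("table.schedule tr")) {
            var cells = row.select("td");
            if (cells.size() >= 3) {
                System.out.printf("%s -> %s departs %s%n",
                        cells.get(0).text(),  // origin station
                        cells.get(1).text(),  // destination
                        cells.get(2).text()); // departure time
            }
        }
    }
}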
