MongoDB and Symfony 2.4: file is not stored in GridFS

I have implemented the file upload functionality following this presentation:
http://www.slideshare.net/mongodb/mongo-db-bangalore-2012-15070802
But the file is not stored in GridFS.
I have done some research on this, also with reference to this blog post:
http://php-and-symfony.matthiasnoback.nl/2012/10/uploading-files-to-mongodb-gridfs-2/
But unfortunately, I have been stuck on this issue for the last 15 days.
Please help.

Please take a look at KnpLabs/Gaufrette and the related KnpLabs/KnpGaufretteBundle.
The Gaufrette bundle provides a level of abstraction around file systems, and it helped me get file-oriented operations up and running quickly. I found it very useful, and in fact the Symfony CMS package leverages this bundle. It may help you out as well.

Related

2SXC/DNN - Delete ADAM Files in Entity

We're designing a system for a client where authenticated users are allowed to upload images. We've created an API to upload the files, but the client only wants to keep the latest file and delete all previous ones, so there would only ever be one.
We've looked through the docs and haven't come across a way for ADAM to handle this, in either 2SXC or DNN's file system.
Internally when deleting images we see API calls like the following to the internal 2SXC API, but we're wondering if this is exposed somewhere within the public API?
https://somedomain.com/api/2sxc/app/auto/data/61393528-b401-411f-a001-f423ea46700a/b7d04e2c-c565-496c-8efb-aa133cf90d33/Photo/delete?subfolder=&isFolder=false&id=189&usePortalRoot=false&appId=3
We could probably use the same endpoint above, but we'd likely run into permission issues or changes to the APIs that could be problematic.
Thank you for any advice you can give! Perhaps #iJungleBoy can provide some thoughts on this.
As a solution from a completely different direction: if you are on a recent release of 2sxc (v12.8+, v13+) and comfortable programming in C#, you might consider doing this as a "cleanup" from a DNN Scheduled Task. The setup is relatively easy. We have a Gist that we use as a starter: you simply put the code in the /App_Code folder and then set up a normal DNN Scheduled Task. Note that you can scroll down to the first comment on the Gist to see a screenshot of a complete working setup.
Accuraty's AccuTasks template on GitHub Gists
There are two more key things to note:
You need to install DNN's CodeDom 3.6 package, because the example uses newer C# features such as string interpolation - OR remove the few $"ASL2021 - {this.GetType().Name}, Task Scheduled Email" bits, or convert them to string.Format() or similar.
Since your task's code is NOT running in a (2sxc) module, if you need app or entity data you'll do something like this: 2sxc Docs - Use 2sxc Instance or App Data from External C# Code
So, if you are comfortable writing code that "finds and deletes stuff older than NN days" - this might be the way to go.
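For illustration, here is a minimal sketch of such a task - NOT the Gist above, just the general shape. The folder path "adam/", the portal ID, the 30-day cutoff and the class name are assumptions you would replace, and the FolderManager/FileManager calls should be checked against your DNN version.

    // Hypothetical cleanup task (goes in /App_Code); assumes the ADAM files live under
    // the portal folder "adam/". Deletes everything older than the cutoff.
    using System;
    using System.Linq;
    using DotNetNuke.Services.FileSystem;
    using DotNetNuke.Services.Scheduling;

    public class AdamCleanupTask : SchedulerClient
    {
        public AdamCleanupTask(ScheduleHistoryItem item) : base()
        {
            ScheduleHistoryItem = item;
        }

        public override void DoWork()
        {
            try
            {
                const int portalId = 0;                  // assumption: single-portal install
                var cutoff = DateTime.Now.AddDays(-30);  // assumption: keep the last 30 days

                var folder = FolderManager.Instance.GetFolder(portalId, "adam/");
                if (folder != null)
                {
                    // Collect all files below the folder (recursive) that are older than the cutoff.
                    var stale = FolderManager.Instance.GetFiles(folder, true)
                        .Where(f => f.LastModificationTime < cutoff)
                        .ToList();

                    foreach (var file in stale)
                    {
                        FileManager.Instance.DeleteFile(file);
                    }

                    ScheduleHistoryItem.AddLogNote("Deleted " + stale.Count + " stale ADAM files. ");
                }

                ScheduleHistoryItem.Succeeded = true;
            }
            catch (Exception ex)
            {
                ScheduleHistoryItem.Succeeded = false;
                ScheduleHistoryItem.AddLogNote("ADAM cleanup failed: " + ex.Message);
                Errored(ref ex);
            }
        }
    }

Plain string concatenation is used for the log notes on purpose, so the sketch also compiles without the CodeDom 3.6 package mentioned above.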

How is ElasticSearch supposed to work in CakePHP 3?

I've been trying my very best not to ask unnecessary questions here on Stack Overflow, but I've been stuck on this problem for almost a week now and I can't find a solution.
I already have a working website built with CakePHP 3.2. What the website basically does is scrape Twitter for tweets containing a given search term, check if each one is already in my database, and store it if it doesn't yet exist. Twitter's JSON response has a "tweet_id" property, and I've been using that value to check whether I should ignore or append a specific tweet to my DB. While this might be okay while my database is small, I suspect it's going to slow things down considerably when my tables grow bigger. Thus my need for ElasticSearch.
My ElasticSearch server is running on my Arch Linux install, and I've configured my app to point to that server. I also have my "Type" object named the same way as my "Tweets" table (I followed the documentation up to the overview part: http://book.cakephp.org/3.0/en/elasticsearch.html). This fails with an "Unknown method 'alias'" error, and Google searches led me to create an alternate pagination class, since that was what some people found to be the cause of the error (https://github.com/lorenzo/audit-stash/issues/4), but that still doesn't fix things.
I'm not sure if I got this right. I installed the ElasticSearch plugin assuming that all I have to do is give the Types the same names as my tables, since to me the documentation "implies" that this should be done on top of the Blog Tutorial to "improve query performance".
TL;DR: how is this supposed to work? Is my assumption above right? Do I name the Types differently and index everything myself? I'm not sure if there's just too much automagic, or if I'm just poor at this sort of thing. And yes, I'm new to frameworks (but not to PHP, among other languages).
Thanks in advance!

Is there an automated way to document Nancy services?

Is there any way to auto-generate Swagger documentation (or similar) for a Nancy service?
I found Nancy.Swagger, but there's no information on how to use it and the demo application doesn't seem to demonstrate generating documentation (if it does, it's not obvious).
Any help would be appreciated. Thanks!
In my current project I've looked into this problem a lot. I used both Nancy.Swagger and Nancy.Swagger.Annotations.
I quickly discarded Nancy.Swagger, because to me it doesn't feel right that you have to create a pure documentation class for each Nancy module. The attributes solution was a bit "cleaner" - at least the code base and documentation were in one place. But this quickly became unmaintainable: module code is unreadable because of the many attributes, and nothing is generated automatically - you have to specify the path, all the parameters, and even the HTTP method as attributes. This is a huge duplication of effort. Problems appeared very quickly; a few examples:
- I changed POST to PUT in Nancy and forgot to update the [Method] attribute.
- I added a parameter but not the attribute for it.
- I changed a parameter from path to query and didn't update the attribute.
It's too easy to forget to update the attributes (let alone the documentation-module solution), which leads to discrepancies between your documentation and the actual code base. Our UI team is in another country and they had some trouble using the APIs because the docs just weren't up to date.
My solution? Don't mix code and documentation. Generating docs from code (like Swashbuckle does) IS OK, but actually writing docs in code and trying to duplicate the code in the docs is NOT. It's no better than writing it in a Word document for your clients.
If you want Swagger docs, just do it the Swagger way:
- Spend some time with Swagger.Editor and really author your API in YAML. It looks all-text and hard, but once you get used to it, it's not.
- Spend some time with Swagger.Codegen and adapt it (it already does a fair job of generating Nancy server code, and with a few adjustments to the moustache templates it was just what I needed).
- Automate your process: write a couple of batch scripts to generate your modules and models from the YAML and copy them to your repository.
Benefits? Quite a few:
- Your YAML definition is now the single source of truth for your REST contract. If somewhere something is different, it's wrong.
- Nancy server code is auto-generated.
- Client code bases are auto-generated (in our case Android, iOS and Angular).
So whenever I change something in the REST contract, all code bases are regenerated and added to the projects in one batch run. I just have to tell the teams that something was updated. They don't have to look through documents and search for the change; they just get their code regenerated, and probably see some compile errors in the case of breaking changes.
Do I still use nancy.swagger(.annotations)?
Yes, I do use it in another project, which has just one endpoint with a couple of methods. They don't change often, so it's not worth the effort to set all of this up, and I get my Swagger docs up and running fast. But if your project is big, the API is changing, and you have multiple code bases depending on your API, my advice is to invest some time in a real Swagger setup.
I am quoting the author's answer here, from https://github.com/khellang/Nancy.Swagger/issues/59:
The installation should be really simple, just pull down the NuGet package, add metadata modules to describe your routes, and hit /api-docs. That should get you the JSON. If you want to add swagger-ui as well, you have to add that manually right now.
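To make the quoted setup more concrete, here is a rough sketch of a metadata module. It is based on the samples in the Nancy.Swagger repository as I remember them, so treat the type and member names (SwaggerMetadataModule, RouteDescriber.DescribeRoute, HttpResponseMetadata) and the named-route overload as assumptions to verify against the current README.

    // Hedged sketch: a Nancy module plus a separate metadata module describing it.
    // Verify the Nancy.Swagger type/method names against the version you install.
    using Nancy;
    using Nancy.Swagger;
    using Nancy.Swagger.Modules;
    using Swagger.ObjectModel;

    public class UsersModule : NancyModule
    {
        public UsersModule() : base("/users")
        {
            // The route gets a name so the metadata module below can refer to it.
            Get("/", _ => Response.AsJson(new[] { "alice", "bob" }), name: "GetUsers");
        }
    }

    public class UsersMetadataModule : SwaggerMetadataModule
    {
        public UsersMetadataModule(ISwaggerModelCatalog modelCatalog, ISwaggerTagCatalog tagCatalog)
            : base(modelCatalog, tagCatalog)
        {
            // Describes the named route above; the result is served as JSON at /api-docs.
            RouteDescriber.DescribeRoute("GetUsers", "", "Returns all users", new[]
            {
                new HttpResponseMetadata { Code = 200, Message = "OK" }
            });
        }
    }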
No, not in an automated way. https://github.com/yahehe/Nancy.Swagger needs lots of manually created metadata.
There is a nice article here: http://www.c-sharpcorner.com/article/generating-api-document-in-nancy-using-swagger/
Looks like you still have to add swagger-ui separately.
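If you do host swagger-ui yourself, one low-effort option is to copy the swagger-ui dist files into a folder in your content root and expose it through Nancy's static content conventions. A minimal sketch, assuming the files sit in a folder named "swagger-ui":

    // Bootstrapper sketch: serve the manually copied swagger-ui files as static content,
    // so the UI can be opened in a browser and pointed at the /api-docs JSON endpoint.
    using Nancy;
    using Nancy.Conventions;

    public class Bootstrapper : DefaultNancyBootstrapper
    {
        protected override void ConfigureConventions(NancyConventions conventions)
        {
            base.ConfigureConventions(conventions);

            // Requests to /swagger-ui/* are served from the swagger-ui folder on disk.
            conventions.StaticContentsConventions.Add(
                StaticContentConventionBuilder.AddDirectory("swagger-ui"));
        }
    }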

Exporting a Trac wiki to Confluence using the UWC tool

I want to create an export of a Trac wiki and import it into Confluence using the UWC tool.
All Trac wiki pages and attachments are stored in a Postgres database.
Can anyone please let me know how I can create a Trac export that UWC can read? I am also confused about the Trac environment - where can I find this?
Thank you,
Akash
Last thing first: if you don't know what a Trac environment is, you'll hardly be fit for the task you're aiming at right now. Reading a bit about Trac, which is largely self-documenting, would help - no matter where you start, be it the 'TracEnvironment' page in your own Trac wiki or the authoritative resource.
OTOH I'm not familiar with UWC at all, but a bit of research revealed that the Trac wiki is supported by a user-contributed module in UWC. So I guess its documentation page is the natural place to start. From what I've read so far, the export/import is file-based and in line with the recommendations under 'Exporter' on that same page.
First, read https://migrations.atlassian.net/wiki/display/UWC/UWC+Trac+Notes.
Second, for attachments, you may need to use this as a reference: http://l33t.peopleperhour.com/blog/2014/07/10/converting-a-trac-wiki-to-confluence/ - in any case, you will need to adapt it to the actual DBMS behind your Trac.
Note that the current UWC-Trac integration is not as polished as one might expect, so be prepared to download the UWC source, debug it and patch it.

Usage of iBatis in the program

Hello folks,
I want to learn iBatis. I tried running some sample code from the internet, but I am getting many exceptions, such as ClassNotFoundException and IOException. Please guide me on this. I want to know several things, for example where I should place my XML files - under src, under my package, or at the project root - and whether any specific installation or settings are required to run an iBatis program. Kindly point me to resources I can refer to for my learning. I tried this code:
http://www.roseindia.net/tutorials/ibatis/ibatis-selection.shtml
Unfortunately roseindia's website is not kept up to date, and most people who commented on that tutorial had quite a number of issues even compiling and running the code.
One good place to start learning iBatis, even up to an expert level, is tutorialspoint. You can access their iBatis tutorials using this link: http://www.tutorialspoint.com/ibatis/index.htm and you can also download a copy of the entire tutorial in PDF format using this link: http://www.tutorialspoint.com/ibatis/ibatis_tutorial.pdf so that you can read it even while offline. They also provide a variety of other programming tutorials. This is indeed a good place to start.
