It may be unusual to ask about requirements, since this kind of information is usually found on the developers' pages. But in this case I wonder whether there can be 'undefined' requirements.
Let me show what I've got.
I prepared a complete setup, including DebugKit, as you can see in the first image.
Then, strictly following the manuals, I created two tables and 'baked all'.
The second image shows a screenshot of the output of users/index.ctp on my local machine and on another server where everything runs normally.
But after uploading this to the server where I intend to host the site, I got lots of errors and 'undefined index' messages. See the output in the third image.
I should mention that the server runs PHP 5.3.28 with PDO enabled, and that I copied all files to the /public_html folder, so there are no special document-root requirements.
So my questions: are there other requirements for running CakePHP 2.5.2? What could cause the malfunction?
Many thanks for any hint
first image: http://cakephp252.96.lt/img/cwdinstallconfirm.png
second image: http://cakephp252.96.lt/img/cupusers.png
third image (failure): http://cakephp252.96.lt/img/cwdusers.png
I've been looking for information on this for a long time and can't find it. I'm starting to think it can't be done when the .parquet files are in Azure Data Lake Storage.
I have a folder with subfolders in Azure Data Lake Storage, and these subfolders contain many .parquet files. I manage to retrieve them using the ListAzureDataLakeStorage + FetchAzureDataLakeStorage combination. Then I try to pass them through a PutDatabaseRecord processor (which I believe is the correct processor for loading them into the DB).
I think I have the PutDatabaseRecord configured correctly, but when it executes it gives me an error: "Failed to process session due to Failed to process StandardFlowFileRecord due to java.lang.NullPointerException: Name is null".
I'm not sure I'm using PutDatabaseRecord correctly. I thought PutDatabaseRecord read the FlowFiles that reach it, interpreting their content as .parquet (it is supposed to use a ParquetReader as its RecordReader) and thereby understanding the data as records. But it surprises me that there is no need to indicate how to interpret the .parquet or how to map its columns to those of the DB table. Does it actually not work the way I think, and does it need the FlowFile content to arrive already as records?
The truth is I can't explain myself any better, because I don't really understand what counts as a record in NiFi or how a record relates to reading a .parquet file.
Either I'm missing a processor or I'm configuring something wrong. The only thing I can find is FetchParquet, which seems able to read a .parquet and put it into the FlowFile as records. However, it can only be used with ListHDFS or ListFile, which don't let me fetch data from Azure Data Lake Storage.
After several tests (using the ConvertRecord and QueryRecord processors), I have concluded that the problem is in how the ParquetReader reads the content of the incoming FlowFiles, since every processor that needs a ParquetReader gives the same error. By downloading the content of the FlowFile entering the processor that uses the ParquetReader (whichever processor it is) and opening it in a .parquet viewer, I have verified that the content itself is fine.
Not knowing what else to do, I have attached a screenshot of the specific error. I still don't know what "Name" the error refers to.
Screenshot: the "Name is null" error
Note: I also posted my problem on Cloudera, perhaps better explained there. I'll leave the link in case someone wants to look at it. (https://community.cloudera.com/t5/Support-Questions/How-can-I-dump-the-parquet-data-that-is-in-Azure/td-p/316020)
In the end, the closest thing to the error I was getting is described here (https://issues.apache.org/jira/browse/NIFI-7817). It appears to be a bug related to the creation of the ParquetReader, which makes sense, because it would affect any processor that used a ParquetReader, and the FlowFiles did not even enter the processor that used it.
I was using NiFi version 1.12.1. I downloaded version 1.13.2 and it no longer gives the "Name" error; the FlowFiles now actually enter the processor. From the download page of the new version (https://nifi.apache.org/download.html) you can access the Release Notes and the Migration Guidance to see what has been fixed relative to previous versions and which processors need care when migrating.
However, even though the data now enters the processor, it still gives me an error. It's a different one, though, and I will open another post for it.
I've been trying my very best not to ask any nosy questions here on Stack Overflow, but it has been almost a week since I got stuck on this problem and I couldn't find any solution.
I already have a working website built with CakePHP 3.2. What the website basically does is scrape Twitter for tweets containing a given search term, check whether each one is already in my database, and store it if it doesn't yet exist. Twitter's JSON response has a "tweet_id" property, and I've been using that value to decide whether to ignore a specific tweet or append it to my DB. While this might be fine while my database is small, I suspect it will slow things down considerably once my tables grow bigger. Hence my need for ElasticSearch.
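For what it's worth, the check I described boils down to something like this (a minimal sketch of my current approach; $tweetData is a hypothetical variable holding one decoded tweet from the JSON response):

    // Skip tweets whose tweet_id is already stored (CakePHP 3.x ORM)
    use Cake\ORM\TableRegistry;

    $tweets = TableRegistry::get('Tweets');
    if (!$tweets->exists(['tweet_id' => $tweetData['tweet_id']])) {
        $tweet = $tweets->newEntity($tweetData);
        $tweets->save($tweet);
    }

That exists() call hits MySQL for every scraped tweet, which is the part I expect to get slow.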
My ElasticSearch server is running on my Arch Linux install, and I've configured my app to point to that server. I also named my "Type" object the same as my "Tweets" table (I followed the documentation up to the overview part: http://book.cakephp.org/3.0/en/elasticsearch.html). This craps out an "Unknown method 'alias'" error, and following Google searches led me to create an alternate pagination class, since that was what some people found to be the cause of the error (https://github.com/lorenzo/audit-stash/issues/4), but that still doesn't fix things.
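For context, my datasource configuration follows the book's overview and looks roughly like this (host and port here are placeholders for my actual server):

    // config/app.php -- Elasticsearch connection per the plugin docs
    'Datasources' => [
        // ... the usual 'default' SQL connection stays here ...
        'elastic' => [
            'className' => 'Cake\ElasticSearch\Datasource\Connection',
            'driver' => 'Cake\ElasticSearch\Datasource\Connection',
            'host' => '127.0.0.1',
            'port' => 9200,
        ],
    ],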
I'm not sure I got this right. I installed the ElasticSearch plugin on the assumption that all I have to do is give the Types the same names as my tables, since to me the documentation "implies" that this should be done on top of the Blog Tutorial to "improve query performance".
TL;DR: how is this supposed to work? Is my assumption above right? Do I name the Types differently and index everything myself? I'm not sure if there's just too much automagic, or if I'm just poor at this sort of thing. And yes, I'm new to frameworks (but not to PHP, among other languages).
Thanks in advance!
I'm currently working on a web application which generates daily error (and non-error) logs.
The current system outputs one log per task to a text file, and sends critical errors, as well as "start" and "finish" type messages, to an email account.
The current workflow is as follows: scour the email inbox for errors, then go and find the corresponding .txt file to look at the associated errors and track down the cause.
There are around 30 .txt files split across about 5 servers.
This system was set up before my time, but I'm looking for any advice on how to deal with the situation.
I have control of the script that produces the error logs, so I can do pretty much anything, but I'm not sure where to start: I'd considered some kind of web-facing dashboard tool, or maybe outputting the files to RSS?
Are there any external or internal tools I should be using?
Of course you may use SQL Server Reporting Services or review this comparison table; there are some packages which support SQL Server, but they may be overwhelming for your task.
It's not entirely clear what your problem is or what you want to do, but if I understand correctly, your biggest issue is that some messages are logged to a file while others are sent by email. Therefore, there is no single location containing all error messages, which makes analysis and troubleshooting difficult.
The best solution would be to use a logging framework that supports multiple logging destinations (file, DB, email) and severities. That would allow you to specify a configuration like "all errors are logged to a text file and critical ones are also sent by email", so you can ensure that you have everything in one place for general analysis but critical errors are also handled with priority.
You didn't mention what programming language you use, but assuming it's .NET-based, log4net and the Enterprise Library are two common frameworks, and there are many questions about them here on SO. Googling should give you a good idea of the pros and cons for your situation. If you're using a different language, you can look for an equivalent package: log4j (Java), logging (Python), etc.
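To make the idea concrete, here is a sketch of that pattern in PHP using Monolog, purely as an illustration (the file path and email addresses are placeholders; log4net, log4j, etc. support the equivalent configuration):

    // One logger, two destinations with different severity thresholds
    use Monolog\Logger;
    use Monolog\Handler\StreamHandler;
    use Monolog\Handler\NativeMailerHandler;

    $log = new Logger('app');

    // Everything at ERROR level and above goes to the text file
    $log->pushHandler(new StreamHandler('/var/log/app/error.log', Logger::ERROR));

    // CRITICAL entries are additionally sent by email
    $log->pushHandler(new NativeMailerHandler(
        'ops@example.com', 'Critical error', 'noreply@example.com', Logger::CRITICAL
    ));

    $log->error('Task failed');        // written to the file only
    $log->critical('Server is down');  // written to the file and emailed

The point is that the code emitting a message only states its severity; where the message ends up (file, email, DB) becomes purely a configuration decision.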
I've been trying to figure this out for a long time now, but so far no luck; maybe somebody can help me.
I have CakePHP 2.2.2 installed on my computer (localhost) and everything works perfectly. But now I want that same project to be online on a remote server. I uploaded everything and set the MySQL connection, but I get a blank page when trying to access the site.
If I upload a fresh CakePHP install it works, but my project doesn't. Debug is set to the default; I think that should be 2? I also deleted the cached files in tmp, but still no errors or anything, just a blank page.
Any info would be helpful, thank you.
I hate it when that happens :). It usually means there is an error somewhere that you can't see because error display is turned off, so call phpinfo() and check whether display_errors is on. Changing the debug mode doesn't always work, since display_errors is set from php.ini.
Unfortunately, if this is the problem and you don't have access to edit the php.ini file, you may need to ask the hosting provider to change it and restart the PHP service.
You can also try this: error_reporting(E_ALL)
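Concretely, you can temporarily drop something like this at the top of app/webroot/index.php (or another early bootstrap file) while you debug, then remove it again:

    // Temporarily force errors to be displayed, overriding php.ini
    ini_set('display_errors', '1');
    error_reporting(E_ALL);

Note that if the host has disabled ini_set, this won't help either, and you're back to asking the provider.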
I uploaded changes to my CakePHP website and discovered that all actions for a particular controller returned a blank page. I worked out what the error was and was able to reproduce it with another controller.
The problem was that the first line of my controller file had a space before the opening PHP tag.
One space cost me hours.
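To illustrate (the controller name here is just an example), the broken file effectively began like this, with one space before the opening tag; that space gets sent to the browser as output before CakePHP can send its headers, and with error display off the result is a blank page:

     <?php
    class PostsController extends AppController {
        // every action in here rendered as a blank page
    }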
Just uploading all the files won't cut it. Make sure you work through this checklist:
First and foremost, check the error log located at app/tmp/logs/error.log; it usually holds some very good pointers as to what is wrong.
Make sure you have uploaded the app/Config/database.php file with the proper details (see the sketch after this list). Local installs usually use the root user without a password; online servers (obviously) do not!
To that end, also make sure you actually have a database with your hosting provider (either your host sent you the details or you need to create one yourself using their control panel).
Make sure you also uploaded all the .htaccess files (the ones under the root directory, /app and /app/webroot); some FTP programs don't show these "hidden" files by default.
If all else fails, contact your hosting provider for further support, as they usually have access to more verbose server logs that can also hold clues.
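For reference, a minimal app/Config/database.php for CakePHP 2.x looks like this (the credentials are placeholders; fill in the details from your hosting provider):

    <?php
    // app/Config/database.php -- minimal sketch with placeholder credentials
    class DATABASE_CONFIG {

        public $default = array(
            'datasource' => 'Database/Mysql',
            'persistent' => false,
            'host'       => 'localhost',
            'login'      => 'your_db_user',
            'password'   => 'your_db_password',
            'database'   => 'your_db_name',
            'prefix'     => '',
        );
    }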
The real problem was simply the encoding I used in Notepad++. All my files were encoded as UTF-8, but they should have been UTF-8 without BOM. After I changed them to UTF-8 without BOM, everything started to work perfectly.
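If you suspect the same problem, a quick throwaway script can list the affected files by checking for the three BOM bytes at the start of each one (adjust the glob pattern to your project layout):

    // List PHP files that are saved as UTF-8 with a BOM
    foreach (glob('app/Controller/*.php') as $file) {
        if (file_get_contents($file, false, null, 0, 3) === "\xEF\xBB\xBF") {
            echo "$file starts with a UTF-8 BOM\n";
        }
    }

Notepad++ can then convert each file from its Encoding menu.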
I have an ASP.NET app in which I've used Vertigo's Slideshow 2 Silverlight image gallery component. All was working well and the app went through testing, when suddenly, after a recent deployment, I started getting an alert box that says:
IMPORTANT: Remove this line from json2.js before deployment.
This pops up after the Silverlight component loads, but the SlideShow2 .xap file seems to work fine after that.
Does anyone have any ideas on why this would just start happening? I've done some research and can't come up with much, and it's pretty mysterious that it just started. I've not used json2.js directly in this application, nor have I customized the Slideshow 2 component in any way.
It also happens in both my dev and production environments.
-Kevin
Something like this?
From http://tech.groups.yahoo.com/group/json/message/1413:
Thu Dec 10, 2009 5:23 am

The server at JSON.org is getting hammered. It turns out that there are some sites that are linking directly to json2.js instead of dispensing it from their own servers. By far the heaviest impact is from onlinebootycall.com. My intention was to provide the world with a free implementation, but the world can buy its own bandwidth.

So I have added this line as the first line in the json2.js file:

alert('IMPORTANT: Remove this line from json2.js before deployment.');

It will not break anything, but it should help get a message to the onlinebootycalls that you should not load code from strange third party servers. It is not safe.

- "Douglas Crockford" <douglas#...>
Don't link to json.js or json2.js directly from json.org. It is bad etiquette, and it uses their bandwidth for your site.
Copy the file to your own server, remove the line, and redeploy.
P.S. What are you using Silverlight for at onlinebootycall.com? Curious... ;)
In doing more research (and taking a step back to evaluate my environment), I found that the implementation of Slideshow 2 I'm using is a modified one from an open-source CodePlex project. This particular version supports streaming images and albums from Picasa Web Albums; the version I'm using is located here: http://slideshow2picasa.codeplex.com/. Checking out their online demos, they exhibit the same behavior, so obviously this implementation links to the json2.js file on json.org's servers as a means to interact with Picasa Web.
Today I'll take a look at their code and see if I can rely on a local copy of json2.js.
Thanks for helping me out.