I'm trying to explore Skipfish by Google
I went through their documentation, and also through the README-FIRST file (present in the dictionaries folder).
As far as I could understand, dictionaries are extremely useful for subsequent scans of the same target.
But what I haven't been able to understand so far is: how is this achieved? What is the underlying mechanism that uses the dictionary, and in what way?
I'd really appreciate some help with this
Thanks
The dictionary consists of words (such as 'index' and 'cgi-bin') and extensions (such as 'old' and 'php') that are combined to form filenames skipfish attempts to access as part of a dictionary attack. For example:
/some/path/index.old
/some/path/index.php
/some/path/cgi-bin.old
While crawling a site, skipfish can add new words to the dictionary as they are discovered in URLs and HTML.
Using an updated dictionary is beneficial because keywords discovered at the end of the first scan are already in the dictionary, ready to be used for probing from the beginning of subsequent scans (instead of only being available after they are discovered partway through a scan).
The details of which words are combined with which extensions are discussed in the "More about dictionary design" section towards the bottom of dictionaries/README-FIRST.
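Conceptually, the mechanism is just a Cartesian product of keywords and extensions, plus dictionary learning during the crawl. The sketch below is an illustration only, not skipfish's actual code or dictionary file format:
# Illustration only -- not skipfish's code or its dictionary format.
words = ["index", "cgi-bin"]
extensions = ["old", "php"]

for word in words:
    for ext in extensions:
        print("/some/path/%s.%s" % (word, ext))

# While crawling, keywords seen in URLs or HTML (say, "admin") are appended
# to the dictionary, so a later scan that reuses the saved dictionary also
# probes /some/path/admin.old, /some/path/admin.php, ... from the very start.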
Note: I'm not a skipfish expert, and you may get a better answer by posting on http://security.stackexchange.com
There is also an article on skipfish at http://resources.infosecinstitute.com/skipfish-vulnerability-scanner/
My question seems to be pretty straightforward, but I haven't been able to find any solutions to this online. I've looked at a number of different types of objects like DataTables and DataAssets, only to realize they are for static data alone.
The goal of my project is to have data-driven configurable assets where we can choose different configurations for our different objects. I have been able to successfully pull JSON data down from the database at run-time, but I would like to save that data to something like a Data Asset (or something similar) that I can read from and write to. That way, when we pull from the database later, we only pull updates to our different configurations rather than the entire database every time at start-up.
On a side note: would this be possible/feasible using an .ini file, or is this kind of thing considered too big for that (i.e., 1000+ JSON objects)?
Any solutions to this problem would be greatly appreciated.
Like you say, DataTable isn't really usable here. You'll need to use UE4's various file I/O API utilities.
Obtaining a Local Path
This function converts a path relative to your intended save directory into one relative to the UE4 executable, which is the format expected throughout UE4's file I/O.
//DataUtilities.cpp
FString DataUtilities::FullSavePath(const FString& SavePath) {
    return FPaths::Combine(FPaths::ProjectSavedDir(), SavePath);
}
"Campaign/profile1.json" as input would result in something like:
"<game.exe>/game/Saved/Campaign/profile1.json".
Before you write anything locally, you should find the appropriate place to do it. Using ProjectSavedDir() results in saving files to <your_game.exe>/your_game/Saved/ in packaged builds, or in your project's Saved folder in development builds. Additionally, FPaths has other named Dir functions if ProjectSavedDir() doesn't suit your purpose.
Using FPaths::Combine to concatenate paths is less error-prone than trying to append strings with '/'.
Storing generated JSON Text Data on Disk
I'll assume you have a valid JSON-filled FString (as opposed to an FJsonObject), since generating valid JSON is fairly trivial.
You could just write directly to the full path given by the function above, but if the directory tree leading to it doesn't exist (i.e., on first run), the write will fail. So, to create that directory tree, there's some path processing and PlatformFile usage.
//DataUtilities.cpp
void DataUtilities::WriteSaveFile(const FString& SavePath, const FString& Data) {
    auto FullPath = FullSavePath(SavePath);

    // Split off the directory portion so it can be created if it doesn't exist yet.
    FString PathPart, Disregard;
    FPaths::Split(FullPath, PathPart, Disregard, Disregard);

    IPlatformFile& PlatformFile = FPlatformFileManager::Get().GetPlatformFile();
    if (PlatformFile.CreateDirectoryTree(*PathPart)) {
        FFileHelper::SaveStringToFile(Data, *FullPath);
    }
}
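A call then looks like this, reusing the hypothetical path from earlier (JsonString is assumed to hold your generated JSON text):
DataUtilities::WriteSaveFile(TEXT("Campaign/profile1.json"), JsonString);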
If you're unsure what any of this does, read up on FPaths and FPlatformFileManager in the documentation section below.
As for generating a JSON string: I generate JSON strings directly from my FStructs when needed instead of using the Json module's DOM, so I don't have experience with the Json module's serialization functionality. If you go that route, however, this answer seems to cover it pretty well.
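If your configuration data already lives in USTRUCTs, one alternative is FJsonObjectConverter from the JsonUtilities module. This is only a sketch under that assumption; the struct and field names below are made up:
// Requires "JsonUtilities" (and "Json") in the module's Build.cs dependencies,
// plus the usual .generated.h include when this lives in a real header.
#include "JsonObjectConverter.h"

USTRUCT()
struct FWeaponConfig {
    GENERATED_BODY()

    UPROPERTY() FString Name;
    UPROPERTY() int32 Damage = 0;
};

// Serializes the UPROPERTY fields of the struct into a JSON string.
FString ToJsonString(const FWeaponConfig& Config) {
    FString Out;
    FJsonObjectConverter::UStructToJsonObjectString(Config, Out);
    return Out;
}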
Pulling Textual Data off the Disk
// DataUtilities.cpp
bool DataUtilities::SaveFileExists(const FString& SavePath) {
    return IFileManager::Get().FileExists(*FullSavePath(SavePath));
}

FString DataUtilities::ReadSaveFile(const FString& SavePath) {
    FString Contents;
    if (SaveFileExists(SavePath)) {
        FFileHelper::LoadFileToString(Contents, *FullSavePath(SavePath));
    }
    return Contents;
}
As is fairly obvious, this only works for string or string-like data, and JSON qualifies.
You could consolidate SaveFileExists into ReadSaveFile, but I found benefit in having a simple "does-this-exist" probe for other methods. YMMV.
I assume if you're already pulling JSON off a server, you have a means of deserializing it into some form of traversable container. If you don't, this is an example from the UE4 Answer Hub of using the Json module to do so.
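In case it's useful, here is a minimal sketch of parsing that string with the Json module (the function name is mine; the headers belong to the "Json" module, which must be in Build.cs):
#include "Dom/JsonObject.h"
#include "Serialization/JsonReader.h"
#include "Serialization/JsonSerializer.h"

// Parse JSON text (e.g. the output of ReadSaveFile) into a traversable FJsonObject.
TSharedPtr<FJsonObject> ParseJson(const FString& JsonText) {
    TSharedPtr<FJsonObject> Root;
    TSharedRef<TJsonReader<>> Reader = TJsonReaderFactory<>::Create(JsonText);
    if (FJsonSerializer::Deserialize(Reader, Root) && Root.IsValid()) {
        return Root;
    }
    return nullptr;
}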
Relevant Documentation
FFileHelper
FFileHelper::LoadFileToString
FFileHelper::SaveStringToFile
IFileManager
FPlatformFileManager
FPaths
UE4 Json.h (which you may already be using)
To address your side note: I would suggest using an extension that matches the type of content saved, if for nothing other than clarity of intention, i.e., descriptive_name.json for files containing JSON. If you know ahead of time that you will need all of the hundreds or thousands of JSON objects at once, it is likely better to group as many as possible into fewer files to minimize overhead.
I have a Solr/Lucene setup where I have indexed a set of documents (MS Word files) and can happily search the content of these documents. However, I would like to return a snippet from within the content of the document that shows where the matching line is (+/- 5 words from the matched term). I have tried to follow a range of Google hits, but my index does not seem to have direct access to the "content".
Can anyone give me some basic and simple pointers to where I might have made any errors on this? I have based all my work so far on the guidance and examples of the Solr Reference Guide, so I am not sure whether the issue is in the search parameters or the original index.
I am doing this to create a clear set of user requirements for building an end solution rather than creating the end solution myself, so I am no expert on the tools and do not need to become one, just need to evidence what is possible with this tool set.
As MatsLindh noted above, the issue was that the config was not copying the actual content of the Tika parse into a specific field, so there was no full text content to display and highlight.
To resolve this I followed the link (https://lucene.apache.org/solr/guide/7_1/uploading-data-with-solr-cell-using-apache-tika.html#configuring-the-solr-extractingrequesthandler) to the guidance documents, reviewed the part on fmap, and used the example given for Last Modified Date as a guide on what to apply.
I then went to the solrconfig.xml file in the relevant core folder and added the following line beneath an already present fmap entry:
<str name="fmap.content">testcontent</str>
I had previously set up the testcontent field in my core via the Solr web interface. I then re-ran my indexing command, and that seemed to do the trick in terms of pulling out the basic content and wrapping the match with a basic emphasis.
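For reference, the kind of query that then returns the highlighted snippets looks something like this (the core name and search term are placeholders; the standard hl.* parameters control snippet count and size):
http://localhost:8983/solr/mycore/select?q=testcontent:searchterm&hl=true&hl.fl=testcontent&hl.snippets=1&hl.fragsize=100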
Thanks all for the input on this. There is still a lot more I want to test to help develop a clear requirement set, but this really helps prove that some of the basics are not complicated.
Stuck on a trivial problem in Grails 3.1.5: show the fields of a domain object, excluding one of them and including a transient property. Yes, this is my first Grails 3 project after many years with previous versions.
The generated show.gsp contains
<f:display bean="rfaPdffile"/>
This will include a field that may contain megabytes of XML. It should never be shown interactively. The display: false constraint is no longer in the docs and seems to be silently ignored.
Next I tried explicitly naming the fields:
<f:with bean="rfaPdffile">
<f:display property='fileName'/>
<f:display property='pageCount'/>
...
</f:with>
This version surprisingly displays the values without any markup whatsoever. Changing display to field,
<f:with bean="rfaPdffile">
<f:field property='fileName'/>
<f:field property='pageCount'/>
...
</f:with>
sort of works, but shows editable values. So does f:all.
In addition, I tried adding other attributes to f:display: properties (as in f:table) and except (as in f:all). I note in passing that those two attributes have different syntax for similar purposes.
In the fields plugin docs my use case is explicitly mentioned as a design goal, so I must have missed something obvious.
My aim is to quickly throw together a prototype GUI, postponing the details until later. Clues are greatly appreciated.
If I understood you correctly, you want all bean properties included in the GSP, but the one with the "megabytes of XML" should not be displayed to the user?
If that is the case you can do:
<f:with bean="beanName">
    <f:field property="firstPropertyName"/>
    <f:field property="secondPropertyName"/>
And the one you don't wish to display:
    <g:hiddenField name="propertyName" value="${beanName.propertyName}"/>
</f:with>
So list all the properties you do want as f:field or f:display tags, and put the one you don't wish to display in a g:hiddenField Grails tag.
You can also try:
<f:field property="propertyName" widget-hidden="true"/>
but the label is not hidden in this case.
Hope it helps
My own answer: "use the force, read the source". The f:display tag has two rather obvious bugs. I will submit a pull request as soon as I can.
Bugs aside, the documentation does not mention that the plugin may pick up the "scaffold" static property from the domain, if it has one. Its value should be a map. Its "exclude" key may define a list of property names (List of String) to be excluded. This probably works already for the "f:all" tag; bug correction is needed for the "f:display" tag.
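For illustration, the scaffold map described above would look something like this on the domain, once the f:display fix is in place (property names other than fileName and pageCount are made up; xmlContent stands in for the huge XML field):
class RfaPdffile {
    String fileName
    Integer pageCount
    String xmlContent   // the multi-megabyte XML that should never be shown

    static scaffold = [exclude: ['xmlContent']]
}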
My subjective impression is that the fields plugin is in a tight spot. It is intertwined with the Grails architecture, making it sensitive to changes in Grails internals. It is also required by the standard scaffolding plugin, making it very visible. Thus it needs constant attention from maintainers, a position not to be envied. Even now conventions for default constraints seem to have changed somewhere between Grails 3.0.9 and 3.1.7.
Performance of the fields plugin is sensitive to the total number of plugins in the app where it is used. It searches all plugins dynamically for templates.
For the wish list I would prefer stricter tag naming. The main tags should be verbs. There are two main actions, show and edit. For each action there are two main variants, single bean or multiple beans.
My answer is that at present (2 March 2017) there is no answer. I have searched the net high and low. For the index (list), create, and edit views, the fields plugin works well enough. A given field can be excluded easily from the create and edit views, relatively easily from the list view (by listing the ones that should show), and in no way I could find from the show view. This is such a common need that one would suspect it will be addressed soon. The same goes for easily showing derived values in the show view, like 'total' for an invoice. One can do that by adding an ordered list with a list item showing the value below the generated ordered list of values, but that is something of a hack.
In some ways, the old way was easier. Yes, it generated long views, but they were generated and didn't have to be done by the programmer - just custom touches here and there.
I'm just beginning to explore i18n in CakePHP and I can't seem to find the right combination of files and functions that will allow me to use multiple .po files. If I use a single .po file (default.po) for every bit of translatable text, that works fine, but I can see that becoming an unmaintainable hairball very, very quickly. I've read the docs and the few articles I can find, but none really dive into i18n beyond the trivial use of one .po file.
Here's where I am right now:
I've "baked" my po templates (.pot files) and copied those into app/locale/eng/LC_MESSAGES (I'm not going to be using the default text as the key so that I can easily spot missing keys). For now, I have -views-layouts-default.po and -views-pages-index.po.
In those .po files, I've entered the text I want to use for each key.
In my homepage (views/pages/index.ctp) and default layout (views/layouts/default.ctp) I've wrapped the text key I want to translate with the __() function.
When I load the homepage, though, all I see are the keys. No text has been translated. If I put up a default.po file, however, any keys I drop in there are populated just fine. I'm clearly missing some piece of the puzzle, but I can't find it. Any help would be much appreciated.
Thanks.
I found the piece I was missing thanks to the CakePHP Google Group. I had been playing with the __d() convenience function, but didn't have a clear picture of how it ties into my .po files. The answer is easy once you know it:
The domain translation:
__d('login', 'PLEASE_LOGIN');
will look for the "PLEASE_LOGIN" key in the file named login.po. I didn't know (and hadn't read anywhere) that the domain == the .po file name (without extension). Learning that made all the difference.
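So, to complete the example, a minimal app/locale/eng/LC_MESSAGES/login.po would contain an entry like this (the msgstr text is just an illustration):
# app/locale/eng/LC_MESSAGES/login.po
msgid "PLEASE_LOGIN"
msgstr "Please log in to continue."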
How do you allow a C extension to use rb_f_require to require a file from outside the ext directory (e.g. requiring lib/foo/foo.rb from ext/foo.so)?
Not really sure why this isn't converted into HTML like the rest of the translated Ruby Hacking Guide chapters, but perhaps some portion of this would be helpful?
http://rhg.rubyforge.org/svn/en/chapter18.txt
Given that rb_f_require appears to do a normal load path search, it should find lib/foo/foo.rb if that directory is on the search path. However, if you are looking for another foo.rb, I would imagine you would have naming problems if foo.so appears first on the path. Perhaps using a different name for foo.rb could solve the problem?
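For what it's worth, here is a minimal sketch of what that could look like from the C side, assuming lib/ is on the load path (as it is for an installed gem) and the Ruby file's feature name stays distinct from the extension's:
/* ext/foo/foo.c */
#include <ruby.h>

void Init_foo(void)
{
    /* rb_require performs the same load-path search as Kernel#require,
       so with lib/ on $LOAD_PATH this picks up lib/foo/foo.rb. */
    rb_require("foo/foo");

    /* ... define modules, classes and methods here ... */
}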