Table for Longest prefix matching - c

I want to build a kernel module that maintains its own table for carrying out longest prefix matching. For this purpose, I am trying to use Linux's "include/net/ip6_fib.h". Can I get the required functionality this way? Am I on the right path?
If the answer is yes, can anyone point me to a good resource that explains the IPv6 FIB API?
Thanks in advance.
Regards

I have very little kernel space experience, but from the user land perspective I know that Linux routing tables can be independent of the routing of packets. If I understood your question correctly, that is what you were asking.
AFAIK Linux routing only uses tables that are defined in the "routing policy", i.e., tables that have routing rules referencing them. The Linux default is to use the tables "local", "main" and "default" (table numbers 255, 254 and 253). So if you put your routes in, e.g., table number 100, they will not be used by the regular routing mechanisms unless you add a rule referencing the table. You can see the routing policy by typing ip rule list. This can also be learned from the "official" iproute documentation, which has not been updated since Alexey N. Kuznetsov wrote it at the turn of the millennium.
However, I don't know how this affects prefix matching and the ip6_fib API, since route lookups are cached (outside the routing tables). You can see the routing cache by typing ip route list table cache. (Side note: an in-depth look at the routing cache can be found at http://vincent.bernat.im/en/blog/2011-ipv4-route-cache-linux.html.)
I cannot provide any help on using the FIB inside the kernel, which was your second question. I can only venture that the kernel source is the most up-to-date documentation, and most probably also the only documentation.
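That said, if what your module needs is the longest-prefix-match operation itself rather than the kernel's FIB, it is easy to prototype in plain C: a binary trie keyed on address bits, where the lookup remembers the deepest node that carries an entry. A minimal sketch (illustrative userspace code, not the ip6_fib API):

    #include <stdlib.h>

    /* Minimal binary trie for longest prefix match, one node per bit.
       Keys are big-endian byte arrays (16 bytes for an IPv6 address). */
    struct node {
        struct node *child[2];
        void *entry;             /* non-NULL if a prefix ends here */
    };

    static int bit_at(const unsigned char *key, int i)
    {
        return (key[i / 8] >> (7 - i % 8)) & 1;
    }

    static void trie_insert(struct node *root, const unsigned char *key,
                            int plen, void *entry)
    {
        struct node *n = root;
        for (int i = 0; i < plen; i++) {
            int b = bit_at(key, i);
            if (!n->child[b])
                n->child[b] = calloc(1, sizeof(struct node));
            n = n->child[b];
        }
        n->entry = entry;
    }

    /* Walk the trie along the address bits, remembering the deepest
       node that holds an entry: that is the longest matching prefix. */
    static void *trie_lookup(struct node *root, const unsigned char *key,
                             int maxbits)
    {
        struct node *n = root;
        void *best = root->entry;
        for (int i = 0; i < maxbits && n; i++) {
            n = n->child[bit_at(key, i)];
            if (n && n->entry)
                best = n->entry;
        }
        return best;
    }

Conceptually this is the same operation the kernel's ip6_fib code performs, just without the locking, RCU, and route semantics.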


How is ElasticSearch supposed to work in CakePHP 3?

I've been trying my very best not to ask any nosy questions here on Stack Overflow, but it has been almost a week since I got stuck on this problem and I couldn't find any solution.
I already have my working website built with CakePHP 3.2. What the website basically does is scrape Twitter for tweets containing a given search term, check if it's already in my database, and store it if it doesn't yet exist. Twitter's JSON response has this "tweet_id" property, and I've been using that value to check for whether I should ignore or append a specific tweet to my DB. While this might be okay while my database is small, I suspect it's going to slow things down considerably when my tables grow bigger. Thus my need for ElasticSearch.
My ElasticSearch server is running on my Arch Linux install, and I've configured my app to point to said server. I also have my "Type" object named the same way as my "Tweets" table (I followed the documentation up to the overview part: http://book.cakephp.org/3.0/en/elasticsearch.html). This craps out an 'Unknown method "alias"' error, and Google searches led me to creating an alternate pagination class, since that was what some people found to be the cause of the error (https://github.com/lorenzo/audit-stash/issues/4), but that still doesn't fix things.
I'm not sure I got this right. I installed the ElasticSearch plugin on the assumption that all I have to do is give the Types the same names as my tables, since to me the documentation "implies" that this should be done on top of the Blog Tutorial to "improve query performance".
TL;DR: how is this supposed to work? Is my assumption above right? Do I name the Types differently and index everything myself? I'm not sure if there's just too much automagic, or if I'm just poor at this sort of thing. And yes, I'm new to frameworks (but not to PHP, among other languages).
Thanks in advance!

Is there an automated way to document Nancy services?

Is there any way to auto-generate Swagger documentation (or similar) for a Nancy service?
I found Nancy.Swagger, but there's no information on how to use it and the demo application doesn't seem to demonstrate generating documentation (if it does, it's not obvious).
Any help would be appreciated. Thanks!
In my current project I've looked into this problem a lot. I used both Nancy.Swagger and Nancy.Swagger.Annotations.
I quickly discarded Nancy.Swagger, because to me personally it doesn't feel right to have to create a pure documentation class for each Nancy module. The attributes solution was a bit "cleaner" - at least code base and documentation were in one place. But it quickly became unmaintainable. Module code becomes unreadable because of the many attributes. Nothing is generated automatically: you have to specify the path, all the parameters, even the HTTP method as attributes. This is a huge duplication of effort. Problems appeared very fast; a few examples:
- I changed POST to PUT in Nancy and forgot to update the [Method] attribute.
- I added a parameter but not the attribute for it.
- I changed a parameter from path to query and didn't update the attribute.
It's too easy to forget to update the attributes (let alone a documentation-module solution), which leads to discrepancies between your documentation and your actual code base. Our UI team is in another country, and they had some trouble using the APIs because the docs just weren't up to date.
My solution? Don't mix code and documentation. Generating docs from code (like Swashbuckle does) IS OK, but actually writing docs in code and trying to duplicate the code in the docs is NOT. It's no better than writing it in a Word document for your clients.
If you want Swagger docs, just do it the Swagger way:
- Spend some time with Swagger.Editor and really author your API in YAML. It looks all-text and hard, but once you get used to it, it's not.
- Spend some time with Swagger.Codegen and adapt it (it already does a fair job of generating Nancy server code, and with a few adjustments to the moustache templates it was just what I needed).
- Automate your process: write a couple of batch scripts to generate your modules and models from the YAML and copy them into your repository.
Benefits? Quite a few:
- Your YAML definition is now the single source of truth for your REST contract. If something somewhere is different, it's wrong.
- Nancy server code is auto-generated.
- Client code bases are auto-generated (in our case Android, iOS, and Angular).
So whenever I change something in the REST contract, all code bases are regenerated and added to the projects in one batch. I just have to tell the teams something was updated. They don't have to look through documents and search for changes; they just get their code regenerated and probably see some compile errors in the case of breaking changes.
Do I still use nancy.swagger(.annotations)?
Yes, I do use it in another project, which has just one endpoint with a couple of methods that don't change often. There it's not worth the effort to set everything up, and I have my Swagger docs up and running fast. But if your project is big, the API is changing, and you have multiple code bases depending on your API, my advice is to invest some time in a real Swagger setup.
I am quoting the author's answer here, from https://github.com/khellang/Nancy.Swagger/issues/59:
The installation should be really simple: just pull down the NuGet package, add metadata modules to describe your routes, and hit /api-docs. That should get you the JSON. If you want to add swagger-ui as well, you have to add that manually right now.
No, not in an automated way. https://github.com/yahehe/Nancy.Swagger needs lots of manually created metadata.
There is a nice article here: http://www.c-sharpcorner.com/article/generating-api-document-in-nancy-using-swagger/
Looks like you still have to add swagger-ui separately.

Create custom queue

I've been looking into writing a new queue strategy for my Asterisk installation; my first project is to combine the features of leastrecent and roundrobin in one queue.
I've found a lot of third-party call-center solutions, but haven't been able to determine whether any of them use strategies other than the standard ones.
So far my thinking is that I have to create my own module that adds the functionality. The documentation on creating modules is scarce, apart from a well-written guide by Russell Bryant.
Is it possible to make some sort of extension to an existing module, or would I have to replace it completely?
Is there documentation of any sort about creating your own queue strategy ?
I'm running Asterisk 11.
Sure, you can change the queue.
Read apps/app_queue.c and extend it as needed. If you are skilled enough to extend and TEST queue (a multithreaded app), you should have no issues reading app_queue.c.
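As a rough conceptual sketch (the struct and names below are hypothetical, not the real app_queue.c internals): each strategy essentially computes a per-member metric, and the member with the lowest metric is tried first, so a combined leastrecent/roundrobin strategy could rank members like this:

    #include <time.h>

    /* Illustrative only - hypothetical names, not Asterisk internals.
       Lower metric = tried first. */
    struct member_info {
        time_t lastcall;     /* when this member last took a call */
        int    rr_distance;  /* hops from the current round-robin pointer */
    };

    /* Rank the longest-idle members first (least recent), and break
       ties among equally idle members by round-robin position. */
    static long combined_metric(const struct member_info *m, time_t now)
    {
        long idle = (long)(now - m->lastcall);
        return -idle * 1000 + m->rr_distance;
    }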
Another solution is to use AMI with an AsyncAGI call.
http://www.moythreads.com/wordpress/2007/12/24/asterisk-asynchronous-agi/
PS: if you have questions like this, it is highly inadvisable to build a call center yourself. Read more books about Asterisk and hire a highly skilled expert to help you. Otherwise the call center very likely will not work well under load.

Need ideas on retrieving data from a website

I'm stumped and need some ideas on how to do this or even whether it can be done at all.
I have a client who would like to build a website tailored to English-speaking travelers in a specific country (Thailand, in this case). The different modes of transportation (bus & train) have good websites providing their respective information, and both are very static in terms of the data they present (the schedules rarely change). Here's one of the sites I would need to get info from: train schedules. The client wants to give users the ability to search for a beginning and end location and determine, using the external websites' information, how they can best get there, with a route and schedule times for the different chosen modes of transport.
Now, in my limited experience, I would think the way to do that would be to retrieve the original schedule info from the external site's server (via API or some other means) and retain the info in a database, which can be queried as needed. Our first thought was to contact the respective authorities to determine how/if this can be done, but this has proven to be problematic due to the language barrier, mainly.
My client suggested what is basically "screen scraping", but that sounds complicated at best: downloading the web page(s) and filtering through the HTML for the relevant data to put into the database. My worry is that the info on these mainly static sites is so static that the data isn't even kept in a database to build the page, and the page itself is simply updated (hard-coded) when something changes.
I could really use some help and suggestions here. Thanks!
Screen scraping is always problematic IMO as you are at the mercy of the person who wrote the page. If the content is static, then I think it would be easier to copy the data manually to your database. If you wanted to keep up to date with changes, you could then snapshot the page when you transcribe the info and run a job to periodically check whether the page has changed from the snapshot. When it does, it sends an email for you to update it.
The above method could also be used in conjunction with some sort of screen scraper, which could fall back to a manual process if the page changes too drastically.
Ultimately, it is a question of how much effort (cost) your client is willing to bear for accuracy.
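A minimal sketch of that snapshot-and-compare idea, using libcurl (the URL is a placeholder, and the checksum is just one cheap way to detect that the page changed):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <curl/curl.h>

    /* Collect the downloaded page into a growable buffer. */
    struct buf { char *data; size_t len; };

    static size_t on_data(char *ptr, size_t size, size_t nmemb, void *ud)
    {
        struct buf *b = ud;
        size_t n = size * nmemb;
        b->data = realloc(b->data, b->len + n + 1);
        if (!b->data) return 0;           /* abort transfer on OOM */
        memcpy(b->data + b->len, ptr, n);
        b->len += n;
        b->data[b->len] = '\0';
        return n;
    }

    /* Cheap checksum of the body - enough to notice an edit. */
    static unsigned long checksum(const char *s)
    {
        unsigned long h = 5381;
        while (*s) h = h * 33 + (unsigned char)*s++;
        return h;
    }

    int main(void)
    {
        struct buf b = { NULL, 0 };
        CURL *curl = curl_easy_init();
        if (!curl) return 1;

        /* hypothetical URL - point this at the schedule page you track */
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/schedule");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_data);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &b);
        if (curl_easy_perform(curl) == CURLE_OK)
            printf("checksum: %lu\n", checksum(b.data ? b.data : ""));
        /* compare against the stored snapshot's checksum; if it differs,
           send yourself an email and re-transcribe the data */
        curl_easy_cleanup(curl);
        free(b.data);
        return 0;
    }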
I have done this for the following site: http://www.buscatchers.com/, so it's definitely more than doable! A key feature of a web-scraping solution for travel sites is that it must send you emails if anything goes wrong during the scraping process. On that site I use a two-day window, so that I have two days to fix the code if the design changes. Only once or twice have I had to change my code, and it's very easy to do.
As for some examples. There is some simplified source code here: http://www.buscatchers.com/about/guide. The full source code for the project is here: https://github.com/nicodjimenez/bus_catchers. This should give you some ideas on how to get started.
I can tell that the data is generated dynamically; it's too well structured. It's not hard for someone familiar with XPath to scrape this site.

Apache module FORM handling in C

I'm implementing an Apache 2.0.x module in C, to interface with an existing product we have. I need to handle FORM data, most likely using POST but I want to handle the GET case as well.
Nick Kew's Apache Modules book has a section on handling form data. It provides code examples for POST and GET which return an apr_hash_t of the key/value pairs in the form. parse_form_from_POST marshals the bucket brigade and flattens it into a buffer, while parse_form_from_GET can simply reference the URL. Both routines rely on a parse_form_from_string routine to walk through each delimited field and extract the information into the hash table.
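In essence, the string-parsing step boils down to something like this (my simplified standalone sketch; the book's version also URL-decodes each field and stores the pairs in an apr_hash_t):

    #include <stdio.h>
    #include <string.h>

    /* Bare-bones sketch: walk a "k1=v1&k2=v2" buffer and report
       each pair. No URL-decoding, no hash-table storage. */
    static void parse_pairs(char *data)
    {
        char *last = NULL;
        for (char *pair = strtok_r(data, "&", &last); pair;
             pair = strtok_r(NULL, "&", &last)) {
            char *eq = strchr(pair, '=');
            if (eq) {
                *eq = '\0';
                printf("key=%s value=%s\n", pair, eq + 1);
            }
        }
    }

    int main(void)
    {
        char form[] = "name=alice&city=bangkok";
        parse_pairs(form);    /* prints the two key/value pairs */
        return 0;
    }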
That would be fine, but it seems like there should be an easier way to do this than adding a couple hundred lines of code to my module. Is there an existing module or routines within apache, apr, or apr-util to extract the field names and associated data from a GET or POST FORM into a structure which C code can more easily access? I cannot find anything relevant, but this seems like a common need for which there should be a solution.
I switched to G-WAN, which offers a transparent ANSI C script interface for GET and POST forms (and many other goodies like charts, GIF I/O, etc.).
A couple of AJAX examples are available on the G-WAN developer page.
Hope it helps!
While on its surface this may seem common, CGI-style content handlers in C on Apache are pretty rare. Most people just use CGI, FastCGI, or one of the myriad frameworks such as mod_perl.
Most of the C apache modules that I've written are targeted at modifying the particular behavior of the web server in specific, targeted ways that are applicable to every request.
If it's at all possible to write your handler outside of an apache module, I would encourage you to pursue that strategy.
I have not yet tried any solution, since I found this SO question as a result of my own frustration with the example in the "Apache Modules" book as well. But here's what I've found, so far. I will update this answer when I have researched more.
Luckily, it looks like this is now a solved problem in Apache 2.4, using the ap_parse_form_data function.
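Here is a condensed sketch based on the pattern in the httpd 2.4 developer guide (error handling trimmed; echoing the pairs back is just for demonstration):

    #include "httpd.h"
    #include "http_protocol.h"
    #include "apr_buckets.h"

    /* Read the POST body's form pairs. ap_parse_form_data fills an
       array of ap_form_pair_t; each value arrives as a bucket brigade
       that we flatten into a pool-allocated string. */
    static int read_form(request_rec *r)
    {
        apr_array_header_t *pairs = NULL;
        int res = ap_parse_form_data(r, NULL, &pairs, -1, HUGE_STRING_LEN);
        if (res != OK || !pairs)
            return (res != OK) ? res : HTTP_BAD_REQUEST;

        while (!apr_is_empty_array(pairs)) {
            ap_form_pair_t *pair = (ap_form_pair_t *) apr_array_pop(pairs);
            apr_off_t len;
            apr_size_t size;
            char *buffer;

            apr_brigade_length(pair->value, 1, &len);
            size = (apr_size_t) len;
            buffer = apr_palloc(r->pool, size + 1);
            apr_brigade_flatten(pair->value, buffer, &size);
            buffer[len] = 0;
            ap_rprintf(r, "%s = %s\n", pair->name, buffer);
        }
        return OK;
    }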
No idea how well this works compared to your example, but here is a much more concise read_post function.
It is also possible that mod_form could be of value.
