waf - build works, custom build targets fail (C)

The command waf build works and shows compiler errors (if there are any), while waf debug or waf release shows no compiler output and always fails. I am using the following wscript file (which may have other shortcomings I am not yet aware of):
APPNAME = 'waftest'
VERSION = '0.0.1'

def configure(ctx):
    ctx.load('compiler_c')
    ctx.define('VERSION', VERSION)
    ctx.define('GETTEXT_PACKAGE', APPNAME)
    ctx.check_cfg(atleast_pkgconfig_version='0.1.1')
    ctx.check_cfg(package='glib-2.0', uselib_store='GLIB', args=['--cflags', '--libs'], mandatory=True)
    ctx.check_cfg(package='gobject-2.0', uselib_store='GOBJECT', args=['--cflags', '--libs'], mandatory=True)
    ctx.check_cfg(package='gtk+-3.0', uselib_store='GTK3', args=['--cflags', '--libs'], mandatory=True)
    ctx.check_cfg(package='libxml-2.0', uselib_store='XML', args=['--cflags', '--libs'], mandatory=True)
    ctx.check_large_file(mandatory=False)
    ctx.check_endianness(mandatory=False)
    ctx.check_inline(mandatory=False)
    ctx.setenv('debug')
    ctx.env.CFLAGS = ['-g', '-Wall']
    ctx.define('DEBUG', 1)
    ctx.setenv('release')
    ctx.env.CFLAGS = ['-O2', '-Wall']
    ctx.define('RELEASE', 1)

def pre(ctx):
    print('Building [[[' + ctx.variant + ']]] ...')

def post(ctx):
    print('Building is complete.')

def build(ctx):
    ctx.add_pre_fun(pre)
    ctx.add_post_fun(post)
    # if not ctx.variant:
    #     ctx.fatal('Do "waf debug" or "waf release"')
    exe = ctx.program(
        features = ['c', 'cprogram'],
        target = APPNAME + '.bin',
        source = ctx.path.ant_glob(['src/*.c']),
        includes = ['src/'],
        export_includes = ['src/'],
        uselib = 'GOBJECT GLIB GTK3 XML'
    )
    # for item in exe.includes:
    #     print(item)

from waflib.Build import BuildContext

class release(BuildContext):
    cmd = 'release'
    variant = 'release'

class debug(BuildContext):
    cmd = 'debug'
    variant = 'debug'
Error resulting from waf debug:
Build failed
-> task in 'waftest.bin' failed (exit status -1):
{task 46697488: c qqq.c -> qqq.c.1.o}
[useless filepaths]
I had a look at the waf demos and read section 6.2.2 of the waf book, but neither supplied the information I needed to fix this issue.
What's wrong, and how do I fix it?

You need to do at least the following:
def configure(ctx):
    ...
    ctx.setenv('debug')
    ctx.load('compiler_c')
    ...
This is necessary because ctx.setenv switches to a brand-new, empty environment, discarding everything configured so far (including the compiler). If you want to keep the previous environment, use ctx.setenv('debug', env=ctx.env.derive()).
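For concreteness, a minimal sketch of a configure arranged that way (the flags and defines are taken from the question; the loop is just one way to lay it out):

def configure(ctx):
    ctx.load('compiler_c')
    # ... common checks from the question go here ...
    for variant, cflags, define in (('debug', ['-g', '-Wall'], 'DEBUG'),
                                    ('release', ['-O2', '-Wall'], 'RELEASE')):
        ctx.setenv(variant, env=ctx.env.derive())  # inherit the common environment
        ctx.env.CFLAGS = cflags
        ctx.define(define, 1)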
Also, you don't need to specify features = ['c', 'cprogram'] explicitly; it is redundant when you call bld.program(...).
P.S. Don't forget to reconfigure after modifying the wscript file.


Snmpset not working on an agent generated by mib2c

I generated code from a MIB file with mib2c. When I try to set an object with read-write access, it returns: Error in packet. Reason: notWritable (That object does not support modification).
I ran my subagent with a few debug flags and found out that not a single function of the generated code is called on an snmpset request, only on snmpget; an snmpget on exactly the same OID returns a valid value. I have a user with RW access everywhere, and I can set a value on sysName.0 with that same user. I tried removing the MIB file and using the exact OID, but got the same result.
Because the request never even reaches the generated code, I am not sure what to try next.
I tried this with two tables generated the same way: one table has an IMPLIED DisplayString index, and the second has an INDEX that combines two INTEGERs.
EDIT:
I found out that a .conf file was created in /var/lib/snmp/ for each of my agents. I tried adding a create_user line with the same name and password, but it disappeared after the agent was started again.
EDIT2:
The code was generated using mib2c.mfd.conf. I tried mib2c.iterate.conf, and it did call the functions in the generated code. So it is not working with mib2c.mfd.conf, but it looks like it will work with mib2c.iterate.conf. I would still like to make it work with mib2c.mfd.conf so that I don't need to change all my subagents.
Output from my subagent, where 3.fw is the index:
agentx/subagent: checking status of session 0x44150
agentx_build: packet built okay
agentx/subagent: synching input, op 0x01
agentx/subagent: session 0x44150 responded to ping
agentx/subagent: handling AgentX request (req=0x1f9,trans=0x1f8,sess=0x21)
agentx/subagent: -> testset
snmp_agent: agent_sesion 0xc4a08 created
snmp_agent: add_vb_to_cache( 0xc4a08, 1, MSE-CONFIGURATION-MIB::mseDpuConfigActivationAdminStatus.3.fw, 0x3d3d0)
snmp_agent: tp->start MSE-CONFIGURATION-MIB::mseDpuConfigActivationTable, tp->end MSE-CONFIGURATION-MIB::mseDpuConfigActivation.3,
agent_set: doing set mode = 0 (SET_RESERVE1)
agent_set: did set mode = 0, status = 17
results: request results (status = 17):
results: MSE-CONFIGURATION-MIB::mseDpuConfigActivationAdminStatus.3.fw = INTEGER: prepare(1)
snmp_agent: REMOVE session == 0xc4a08
snmp_agent: agent_session 0xc4a08 released
snmp_agent: end of handle_snmp_packet, asp = 0xc4a08
agentx/subagent: handling agentx subagent set response (mode=162,req=0x1f9,trans=0x1f8,sess=0x21)
agentx_build: packet built okay
agentx/subagent: FINISHED
agentx/subagent: handling AgentX request (req=0x1fa,trans=0x1f8,sess=0x21)
agentx/subagent: -> cleanupset
snmp_agent: agent_sesion 0xc7640 created
agent_set: doing set mode = 4 (SET_FREE)
agent_set: did set mode = 4, status = 17
results: request results (status = 17):
results: MSE-CONFIGURATION-MIB::mseDpuConfigActivationAdminStatus.3.fw = INTEGER: prepare(1)
snmp_agent: REMOVE session == 0xc7640
snmp_agent: agent_session 0xc7640 released
snmp_agent: end of handle_snmp_packet, asp = 0xc7640
agentx/subagent: handling agentx subagent set response (mode=162,req=0x1fa,trans=0x1f8,sess=0x21)
agentx_build: packet built okay
agentx/subagent: FINISHED
agentx/subagent: checking status of session 0x44150
agentx_build: packet built okay
agentx/subagent: synching input, op 0x01
agentx/subagent: session 0x44150 responded to ping
Values/config used for generating code:
## defaults
#eval $m2c_context_reg = "netsnmp_data_list"#
#eval $m2c_data_allocate = 0#
#eval $m2c_data_cache = 1#
#eval $m2c_data_context = "generated"# [generated|NAME]
#eval $m2c_data_init = 1#
#eval $m2c_data_transient = 0#
#eval $m2c_include_examples = 1#
#eval $m2c_irreversible_commit = 0#
#eval $m2c_table_access = "container-cached"#
#eval $m2c_table_dependencies = 0#
#eval $m2c_table_persistent = 0#
#eval $m2c_table_row_creation = 0#
#eval $m2c_table_settable = 1#
#eval $m2c_table_skip_mapping = 1#
#eval $m2c_table_sparse = 1#
#eval $mfd_generate_makefile = 1#
#eval $mfd_generate_subagent = 1#
SNMPd version:
# snmpd --version
NET-SNMP version: 5.9
Web: http://www.net-snmp.org/
Email: net-snmp-coders@lists.sourceforge.net
I found out that in the *_interface.c file generated from the mib2c.mfd.conf template, there is an inverted check:
#if !(defined(NETSNMP_NO_WRITE_SUPPORT) || defined(NETSNMP_DISABLE_SET_SUPPORT))
HANDLER_CAN_RONLY
#else
HANDLER_CAN_RWRITE
#endif /* NETSNMP_NO_WRITE_SUPPORT || NETSNMP_DISABLE_SET_SUPPORT */
I removed the ! from the condition and it started working. Both defines are undefined, so the code should select HANDLER_CAN_RWRITE, but because of the inverted check it used HANDLER_CAN_RONLY. With the ! removed, the condition reads #if defined(NETSNMP_NO_WRITE_SUPPORT) || defined(NETSNMP_DISABLE_SET_SUPPORT), so HANDLER_CAN_RONLY is only used when write support really is disabled.

JULI Tomcat 7 logging.properties

I have a jar that contains a CustomLoginModule to perform JAAS authorization. The jar is located in ${CATALINA_BASE}/lib. I have an org.apache.juli.logging.Log object that performs logging in the module at different levels. I would like the module's logs to be written to their own file, e.g. jaas.log, instead of to catalina.out or catalina.log. With the configuration below, jaas.log is created but stays empty, and all logging still goes to catalina.out and catalina.log. Can anybody help with this? Thank you very much.
logging.properties file:
handlers = 1catalina.org.apache.juli.FileHandler, \
    2localhost.org.apache.juli.FileHandler, \
    3manager.org.apache.juli.FileHandler, \
    4host-manager.org.apache.juli.FileHandler, \
    5jaasauth.org.apache.juli.FileHandler
#, java.util.logging.ConsoleHandler

.handlers = 1catalina.org.apache.juli.FileHandler, \
    5jaasauth.org.apache.juli.FileHandler

#.handlers = 1catalina.org.apache.juli.FileHandler, java.util.logging.ConsoleHandler, 5jaasauth.org.apache.juli.FileHandler
############################################################
# Handler specific properties.
# Describes specific configuration info for Handlers.
############################################################
1catalina.org.apache.juli.FileHandler.level = FINE
1catalina.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
1catalina.org.apache.juli.FileHandler.prefix = catalina.
2localhost.org.apache.juli.FileHandler.level = FINE
2localhost.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
2localhost.org.apache.juli.FileHandler.prefix = localhost.
3manager.org.apache.juli.FileHandler.level = FINE
3manager.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
3manager.org.apache.juli.FileHandler.prefix = manager.
4host-manager.org.apache.juli.FileHandler.level = FINE
4host-manager.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
4host-manager.org.apache.juli.FileHandler.prefix = host-manager.
5jaasauth.org.apache.juli.FileHandler.level = ALL
5jaasauth.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
5jaasauth.org.apache.juli.FileHandler.prefix = jaas-auth.
############################################################
# Facility specific properties.
# Provides extra control for each logger.
############################################################
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].handlers = 2localhost.org.apache.juli.FileHandler

org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager].handlers = 3manager.org.apache.juli.FileHandler

org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host-manager].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host-manager].handlers = 4host-manager.org.apache.juli.FileHandler

com.mymodule.apps.orchestration.CustomLdapLoginModule.level = ALL
com.mymodule.apps.CustomLdapLoginModule.handlers = 5jaasauth.org.apache.juli.FileHandler

Anaconda (Python) + Cmder (Clink) on Windows - Unify Dueling Custom Prompts

If you run Anaconda on Windows, you have an activate.bat file which concludes with this line to put your current conda env on the prompt:
set PROMPT=[%CONDA_DEFAULT_ENV%] $P$G
If you run Cmder on Windows, there is a nice Lua script to customize your prompt:
function lambda_prompt_filter()
    clink.prompt.value = string.gsub(clink.prompt.value, "{lamb}", "λ")
end

clink.prompt.register_filter(lambda_prompt_filter, 40)
These two scripts do not play very nicely with each other. Clink has an API that looks like it could incorporate the change from activate.bat, but I cannot figure out how to call it from a batch file.
My overall goal is to merge these two prompts into the nicer Cmder-style. My thought is to create an environment variable, change activate.bat to check for the existence of the variable, and, if so, call the Clink API to change the prompt instead of set PROMPT. At that point I would think I could create a new filter to cleanly merge the value in. I can't figure out how to call the API from the batch file, though.
Other solutions welcome.
EDIT: Partial, non-working solution
require "os" -- added to top of file, rest in filter function
local sub = os.getenv("CONDA_DEFAULT_ENV")
if sub == nil then
sub = ""
end
print(sub)
clink.prompt.value = string.gsub(clink.prompt.value, "{conda}", sub)
I added a {conda} token in the prompt definition at the very beginning, removed the prompt statement from activate.bat, and added the code above to git_prompt_filter. Before using activate, everything is fine: the {conda} is replaced by the empty string. However, if I use activate and then switch into a folder with a git repo to trigger the change, I see:
{conda}C:\...
Does os.getenv not see user-set variables? I don't know what else the problem could be. I also tried adding a print; it doesn't print the contents of CONDA_DEFAULT_ENV either.
This is what I do to reset the prompt and add the conda env name to the prompt:
---
-- Find out the basename of a file/directory (last element after \ or /)
-- @return {basename}
---
function basename(inputstr)
    local sep = "\\/"
    local last = nil
    for str in string.gmatch(inputstr, "([^"..sep.."]+)") do
        last = str
    end
    return last
end

---
-- Find out the current conda env
-- @return {false|conda env name}
---
function get_conda_env()
    local env_path = clink.get_env('CONDA_DEFAULT_ENV')
    if env_path then
        return basename(env_path)
    end
    return false
end

---
-- after conda activate: reset the prompt and add the conda env name
---
function conda_prompt_filter()
    -- reset to original, e.g. after conda activate destroyed it...
    if string.match(clink.prompt.value, "{lamb}") == nil then
        -- orig: $E[1;32;40m$P$S{git}{hg}$S$_$E[1;30;40m{lamb}$S$E[0m
        -- color codes: "\x1b[1;37;40m"
        local cwd = clink.get_cwd()
        local prompt = "\x1b[1;32;40m{cwd} {git}{hg} \n\x1b[1;30;40m{lamb} \x1b[0m"
        clink.prompt.value = string.gsub(prompt, "{cwd}", cwd)
    end
    -- add in the conda env name
    local conda_env = get_conda_env()
    if conda_env then
        clink.prompt.value = string.gsub(clink.prompt.value, "{lamb}", "["..conda_env.."] {lamb}")
    end
end

clink.prompt.register_filter(conda_prompt_filter, 10)
I want to build on @Jan-Schulz's answer, since it didn't exactly work for me in April 2017.
Instead of editing cmder/vendor/clink/clink.lua, I added custom code to cmder/config/prompt.lua, which is not overwritten on upgrade. (You can add other modifications to the Cmder prompt in this file as well, using the Clink Lua API.)
I was also having an issue where {lamb} was not being replaced with the proper λ character, so I added another filter that runs at the end of all processing.
---
-- Find out the basename of a file/directory (last element after \ or /)
-- @return {basename}
---
function basename(inputstr)
    local sep = "\\/"
    local last = nil
    for str in string.gmatch(inputstr, "([^"..sep.."]+)") do
        last = str
    end
    return last
end

---
-- Find out the current conda env
-- @return {false|conda env name}
---
function get_conda_env()
    local env_path = clink.get_env('CONDA_DEFAULT_ENV')
    if env_path then
        return basename(env_path)
    end
    return false
end

---
-- after conda activate: reset the prompt and add the conda env name
---
function conda_prompt_filter()
    -- reset to original, e.g. after conda activate destroyed it...
    if string.match(clink.prompt.value, "{lamb}") == nil then
        -- orig: $E[1;32;40m$P$S{git}{hg}$S$_$E[1;30;40m{lamb}$S$E[0m
        -- color codes: "\x1b[1;37;40m"
        local cwd = clink.get_cwd()
        local prompt = "\x1b[1;32;40m{cwd} {git}{hg} \n\x1b[1;30;40m{lamb} \x1b[0m"
        clink.prompt.value = string.gsub(prompt, "{cwd}", cwd)
    end
    -- add in the conda env name
    local conda_env = get_conda_env()
    if conda_env then
        clink.prompt.value = string.gsub(clink.prompt.value, "{lamb}", "["..conda_env.."] {lamb}")
    end
end

function fix_lamb()
    if string.match(clink.prompt.value, "{lamb}") ~= nil then
        clink.prompt.value = string.gsub(clink.prompt.value, "{lamb}", "λ")
    end
end

clink.prompt.register_filter(conda_prompt_filter, 1)
clink.prompt.register_filter(fix_lamb, 999)
Why not just delete the line from activate.bat and do all the logic in your cmder profile? CONDA_DEFAULT_ENV will be empty if no environment is active.

Waf:Create custom parallel tasks

In Waf, how can I create multiple custom tasks that can run in parallel (with --jobs=JOBS)?
Sources = ["C:\\src1.c", "C:\\Mod1\\src2.c", ...]  # one per call, about 30 of them
Incl_Paths = ["Mod1", "Mod2"]  # list all of them in every call
INCL_ST = "-I%s"  # how to format an include path as an argument
Ext_out = "_loc"  # output file extension
The goal:
C:\\LOC.exe -IMod1 -IMod2 C:\\src1.c > build\\src1.c_loc //or better src1_loc
C:\\LOC.exe -IMod1 -IMod2 C:\\Mod1\src2.c > build\\src2.c_loc //or better src2_loc
...
I couldn't get it to work:
def build(bld):
    for i in Sources:
        bld.new_task_gen(
            source = i,
            rule = 'C:\\LOC.exe ${INCL_ST:Incl_Paths} ${SRC} > ' + i + Ext_out,
        )
I also couldn't work out how to extract the exe with find_program:
# find_program(self, filename, path_list=[], var=None, environ=None, exts=''):
cfg.find_program("C:\\LOC.exe", var='LOC')
To change from:
rule='C:\\LOC.exe ...'
To:
rule='${LOC} ...'
Something like this should work with waf 1.7:
from waflib.Task import Task
from waflib.TaskGen import extension

Ext_out = "_loc"  # output file extension

def configure(conf):
    # loc.exe must be in the system path for this to work
    conf.find_program(
        'loc',
        var = "LOC",
    )
    conf.env.Incl_Paths = ["Mod1", "Mod2"]
    conf.env.INCL_ST = "-I%s"

@extension('.c')
def process_loc(self, node):
    out_node = node.change_ext(Ext_out)
    tsk = self.create_task('loc')
    tsk.set_inputs(node)
    tsk.set_outputs(out_node)

class loc_task(Task):
    ext_in = ['.c']
    ext_out = ['_loc']
    run_str = "${LOC} ${INCL_ST:Incl_Paths} ${SRC} > ${TGT}"

def build(bld):
    bld(source = ["src1.c", "src2.c"])
Well, it works for me on Linux, faking loc ...
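If loc.exe is not on the system PATH (the question hard-codes C:\\LOC.exe), find_program can also be told where to look. A minimal sketch, with the directory being an assumption:

def configure(conf):
    # Assumption: loc.exe lives in C:\ -- pass that directory explicitly
    conf.find_program('loc', var='LOC', path_list=['C:\\'])

After configure, ${LOC} in a rule or run_str expands to the program that was found.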

In Django how can I run a custom clean function on fixture data during import and validation?

In a ModelForm I can write a clean_<field_name> member function to automatically validate and clean up data entered by a user, but what can I do about dirty JSON or CSV files (fixtures) during a manage.py loaddata?
Fixtures loaded with loaddata are assumed to contain clean data that doesn't need validation (usually as the inverse of a prior dumpdata), so the short answer is that loaddata isn't the approach you want if you need to clean your inputs.
However, you can probably use some of the underpinnings of loaddata while implementing your custom data-cleaning code: you can easily script something with the Django serialization libs that reads your existing data files in and then saves the resulting objects normally, after the data has been cleaned up.
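For example, a minimal sketch along those lines, assuming a JSON fixture at fixtures/Entity.json and a model cleaning method such as the clean_zip_code shown below:

from django.core import serializers

with open('fixtures/Entity.json') as f:
    for wrapped in serializers.deserialize('json', f):
        obj = wrapped.object  # the underlying model instance
        obj.zip_code = obj.clean_zip_code(obj.zip_code)  # hypothetical cleaner
        obj.save()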
In case others want to do something similar: I defined a model method to do the cleaning (so it can also be called from ModelForms):
import re

# class attributes and method on the model:
MAX_ZIPCODE_DIGITS = 9
MIN_ZIPCODE_DIGITS = 5

def clean_zip_code(self, s=None):
    #s = str(s or self.zip_code)
    if not s:
        return None
    s = re.sub(r"\D", "", s)  # keep digits only
    if len(s) > self.MAX_ZIPCODE_DIGITS:
        s = s[:self.MAX_ZIPCODE_DIGITS]
    if len(s) in (self.MIN_ZIPCODE_DIGITS - 1, self.MAX_ZIPCODE_DIGITS - 1):
        s = '0' + s  # FIXME: deal with other intermediate lengths
    if len(s) >= self.MAX_ZIPCODE_DIGITS:
        s = s[:self.MIN_ZIPCODE_DIGITS] + '-' + s[self.MIN_ZIPCODE_DIGITS:]
    return s
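For illustration, what the method produces for two hypothetical inputs (the Entity model name is an assumption):

e = Entity()  # hypothetical model defining the constants and method above
print(e.clean_zip_code('875001234'))  # -> '87500-1234' (nine digits get hyphenated)
print(e.clean_zip_code('8750'))       # -> '08750' (a dropped leading zero is restored)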
Then I wrote a standalone Python script to clean up my legacy JSON files using any clean_<field> methods found among the models:
import os, json

def clean_json(app='XYZapp', model='Entity', fields='zip_code', cleaner_prefix='clean_'):
    # Set the DJANGO_SETTINGS_MODULE environment variable.
    os.environ['DJANGO_SETTINGS_MODULE'] = app + ".settings"
    settings = __import__(app + '.settings').settings
    models = __import__(app + '.models').models
    fpath = os.path.join(settings.SITE_PROJECT_PATH, 'fixtures', model + '.json')
    if isinstance(fields, (str, unicode)):
        fields = [fields]
    Ns = []
    for field in fields:
        try:
            instance = getattr(models, model)()
        except AttributeError:
            print 'No model named %s could be found' % (model,)
            continue
        try:
            cleaner = getattr(instance, cleaner_prefix + field)
        except AttributeError:
            print 'No cleaner method named %s.%s could be found' % (model, cleaner_prefix + field)
            continue
        print 'Cleaning %s using %s.%s...' % (fpath, model, cleaner.__name__)
        fin = open(fpath, 'r')
        if fin:
            l = json.load(fin)
            before = len(l)
            cleans = 0
            for i in range(len(l)):
                if 'fields' in l[i] and field in l[i]['fields']:
                    l[i]['fields'][field] = cleaner(l[i]['fields'][field])  # cleaner returns None to delete records
                    cleans += 1
            fin.close()
            after = len(l)
            assert after > .5 * before
            Ns += [(before, after, cleans)]
            print 'Writing %d/%d (new/old) records after %d cleanups...' % Ns[-1]
            with open(fpath, 'w') as fout:
                fout.write(json.dumps(l, indent=2, sort_keys=True))
    return Ns

if __name__ == '__main__':
    clean_json()
