
Thursday, November 23, 2017

Complete GNU Autotools shared library module project

What I dislike most about people posting their expertise online is when they can't be bothered to be thorough and present a complete, working example of anything.


It really reminds me of lonely introverts on StackExchange harassing nice people who just want a quick answer, not a sermon, and not a date.  You may remember when I read an article about lonely introverts and experimented on them by posting something they couldn't resist attacking to show how smart they were; they couldn't help themselves but set their own hair on fire.  QED


When I solve a piece of technology I really do try to post a completed version of it, so people can use it without repeating the effort needed to reassemble it.


When someone boasts about their expertise on a topic, presents half an argument, then <hand wave - hand wave> proclaims "Ta da! It works," my first thought is: does this person actually know how to finish this off? Where is the proof?

Leaving out details gives me no confidence in the proclaimer.

Oh yes, in theory, waving some knowledge around and letting you figure out the rest seems "helpful," but it's really more pretentious and pious than practical.

Enough soapbox.

Dynamic Modules Project

I posted on GitHub a complete Autotools project that configures a main program which dynamically loads a shareable module from another directory inside a notional apps directory.

Dynamically loadable modules are hard. The GNU Autotools are hard. Few people work through an entire example start to finish.  My research turned up lots of half-answers, and because the interfaces of libtool and dlopen/dlsym are so flexible, there are many possible answers; the best one for your application may not look like this in the end. That's as far as I will go towards excusing others who present half solutions.  While one should cover a topic, one should also provide some answers to the questions at the end.

In this project, the Autotools configure shareable modules that have some of the symbols contained within the "app" code exposed (see .libs/libfoo.exp) while other symbols for functions and variables remain closed.

Here is the project tree inside src/:

.
├── apps
│   ├── Makefile
│   ├── Makefile.am
│   ├── Makefile.am~
│   ├── Makefile.in
│   └── test
│       ├── foo1.c
│       ├── foo1.c~
│       ├── libfoo.la
│       ├── libfoo_la-foo1.lo
│       ├── libfoo_la-foo1.o
│       ├── Makefile
│       ├── Makefile.am
│       ├── Makefile.am~
│       ├── Makefile.in
│       ├── test.c
│       └── test.c~
├── dynamic
├── dynamic-main.o
├── main.c
├── Makefile
├── Makefile.am
├── Makefile.in
└── tree.out

Of course, the in-tree build is one flavour of the programming philosophy. Some developers prefer keeping the source files pristine and building outside the tree, as RTEMS does. That is justified when the Autotools cache too much data only to ruin work down the road; of course, one can also just toast the entire build folder and start over. Each school has merits.

This project is set up somewhat like a distributed, modular software build. Instead of a number of main applications communicating with pipes etc., this project uses self-contained "apps" that are dynamically loaded into the running main code. The apps are segmented into a sub-sub-folder. The hack is direct knowledge of the shared library file underneath. I can solve that too, but it distracts from the point of the interfacing. You can get libtool to make plain .so libraries and skip the libtool lt_dlopen versions; you can configure the Autotools to control it.


In this case a .so library (libfoo.so) without version numbers is created in src/apps/test and is loaded from somewhere else in the build tree.
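The unversioned libfoo.so falls out of libtool's -module and -avoid-version link flags, and an -export-symbols option is what leaves some symbols exposed (the .libs/libfoo.exp list) while the rest stay hidden. A minimal sketch of what a src/apps/test/Makefile.am could contain; the variable spellings and the export regex here are my assumptions, not necessarily what the posted repository uses:

```makefile
# Build libfoo as a dlopen-able module, not an ordinary library.
lib_LTLIBRARIES   = libfoo.la
libfoo_la_SOURCES = foo1.c

# -module               : mark the library as a dlopen()/lt_dlopen() module
# -avoid-version        : emit plain libfoo.so with no version suffix
# -export-symbols-regex : expose only matching symbols (hypothetical pattern)
libfoo_la_LDFLAGS = -module -avoid-version -export-symbols-regex 'foo_.*'
```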

The main function calls into the module through a pointer to function, and a variable is passed to an internal function in a closed part of the library for execution.

Notice that while the shared library may be called many times, the internal static int count is always one. That differs from the behaviour one might expect.

Wednesday, July 5, 2017

Federal Legal Overreach, Privacy, and Technology

I just watched the debate presented by IQ2: Debating the Constitution: Technology and Privacy and I have some thoughts on privacy rights, technology, and the balance.

I was going to describe this in philosophical rants but decided technical people might find it more interesting.

My first observation is that one should never hold a legal-technical debate with either all lawyers or all technologists. Organizers seem to think ex-government secretaries and law professors are the ideal representation; in fact neither are, because they are jaded by old precedent and already embedded in the law as it stands. The law of the future will be written by those fighting new cases right now.   Privacy-rights arguments are always two-headed, and without both expertises present the discussion is as complete and stable as a two-legged chair. One needs to volley from technically feasible to publicly advantageous and back again, and consider technologies like encryption together with the legal implications for and against them at every turn.  You get no real debate without a profound knowledge of both applied equally to each point. So I found the debate lacked real debate.

The debate itself consisted of generally known arguments. It was all plowed ground, with no new or interesting spins on interpretation. There is a balance between the security urgency of the public good and the civil rights of the individual and the company in the long term. The balance must be weighed and the tension considered at every instance where the two competing interests conflict.

So let me share with you my impressions.

Firstly, I start with an expression by Winston Churchill,

"Science bestowed immense new powers on man, and, at the same time, created conditions which were largely beyond his comprehension."

It doesn't matter whether you are talking about nuclear fission, radar, or networked computers: mankind builds technology generally with a singular purpose at first, because he is distracted by the considerable task of making it work, and then later, often through people unfamiliar with it, like lawyers, the interpretation of its use or purpose becomes very different. W. Brian Arthur warned in his book The Nature of Technology that ALL technology has unintended consequences. It has always been this way and probably always will be.

Mankind never gets out ahead of his knowledge far enough to look back and pause. I state this as a truism.

With that in mind, what is lacking at first is a direct symbiosis of lawyers and engineers to describe how new technology works. If we had this meeting of the minds, lawyers could ask for technology that behaves in a certain way, making it both helpful and private, and engineers would design it with this end in mind, not as an afterthought.

Instead we have lawyers looking back and asking for descriptive help (can you fix this? can you invade this phone?) in ways that aren't necessarily legal or practical.

What is needed is for lawyers to ask for prescriptive help: it would be better if technology arrived at this, or delivered this fact.  This would solve problems before they happened. It is almost a pipe dream to wish for this kind of planning in a non-planned capitalistic society.

So, in reality, we have what Apple and other tech companies do.

They design products for the single purpose they see now, and they react to what the government asks for, balancing what is commercially important to them against what is within the law.

I called this post "federal legal overreach" because that perfectly describes the facts I have laid out. Companies huddle to make something, the government overreaches into that technology and demands access to some of it after the design stage.

The single biggest problem with this state of affairs isn't that both sides are doing the best they can to achieve competing objectives in tension. The problem is that it's always a one-way transaction. If there were a meeting of the minds, one might counterbalance demands with solutions that go farther to meet companies' rights and individual privacy as well.

Instead, there should be a law that obligates the government to cooperate to the same level it expects of companies, to safeguard all rights not just in the moment but over the long haul. That would make company objections less potent, because right now companies use engineering to avoid lawyers, instead of using lawyers and engineers at the same time to achieve both objectives.

For example, yes, it's always compelling when a terrorist's phone might break up a terror network and stop an attack. No sane human would object to the characterization that this is a security threat, that helping is in the public good, and that the government should ask for and get that help. What is forgotten long after this one phone-hack request is what happens to that data: where is the government's obligation to exceed expectations in its safekeeping and destruction? Lawyers lose evidence; facilities are stolen from. Where is the expectation that the government fulfill the companies' obligations to their clients when it demands access?

Why should a company open a phone wide open forever for a government? Why can't it encrypt and decrypt a copy for a time-limited period, after which the copy automatically destroys itself? Right now it's an all-or-nothing proposition based on the arguments above.

The government's expediency justification would be moot if there were a jointly designed encryption and storage system made by both the companies and the government. If the data were hidden but accessible, they could spend time and effort making a valid argument to the courts instead of drag-netting the internet and bypassing both.

If the US government can store nuclear waste for eons, and collect intelligence from global networks illegally or extra-legally, why can't this same government collect private data in an agreed encrypted form from companies and networks that it can access only in justified cases for limited uses?

With a pre-made solution, a court could rule that investigators may access a phone's contacts, credit card data, messages, or some or all of the above. With a defined system the order can be made specific to limit overreach: instead of granting anything for all time, it could specify from and to dates.

At the same time, the government's access to data would be limited and time-dependent. That would go a long way towards assuring that there are few unintended consequences of data spilling into the open. They would have to safeguard and account for all data in a responsible and obligated way.

Of course, until they invite new people with new ideas, these debates will remain steeped in the present dogma.




Monday, June 5, 2017

#StackExchange people live in their own fantasy land.


They concoct arbitrary rules which make sense to them, and them alone, to justify processes without any justification.

So if a new user causes problems they eradicate all traces of that person, but an older user (older because of earned reputation) is too "disruptive" to punish, because punishing them would punish other users of their fantasy currency. Older users are more likely to agree with the arbitrary rules; new users are not. So they won't punish an old developer who has finally had enough of the process insanity. Were an older user driven out, the ripple of discontent in imaginary-reward land would be serious and substantial. They couldn't sweep it under the rug.

According to Wikipedia:

A 2013 study has found that 77% of users only ask one question, 65% only answer one question, and only 8% of users answer more than 5 questions.[27] As of 2011, 92% of the questions were answered, in a median time of 11 minutes.[28] Since 2013, the Stack Exchange network software automatically deletes questions that meet certain criteria, including having no answers in a certain amount of time.

Can anyone else see the flaw in their logic?

Wednesday, May 24, 2017

Attack of the #StackExchange Process Nazis

https://stackoverflow.com/questions/44159709/c-implementation-quick-sort?answertab=oldest#tab-top





#stackexchange process Nazis, unaware that the objective of a quick sort algorithm is to sort, quickly. An immediate, unhelpful overreaction to the question asker and to me, the attempted helper.

He just wanted an answer; he didn't provide enough information, and my response wasn't in the "correct" spirit of other people's judgement, so he couldn't get the help he needed.

Another sad example of petty, lonely introverts with a smidgen of power over others.

This is the explanation for justified heavy-handedness:


In April 2009, Stack Exchange implemented a policy of "timed suspension",[26] in order to curtail users who either show "No effort to learn (the community rules) and improve over time" or engage in "disruptive behavior" and become a nuisance. The suspension is accompanied by temporarily setting the user's reputation score at '1' and a notation on the user's profile page indicating the suspension and remaining duration.

And yet, this user didn't care about any of that, he just needed help.  #Sad

Thursday, April 6, 2017

How to add a full CHECK_HEADER() to configure.ac

Gary V. Vaughan's (with Ben Elliston, Tom Tromey, and Ian Lance Taylor) book GNU AUTOCONF, AUTOMAKE, AND LIBTOOL covers a lot of the basics of how one uses the GNU tools to automate Richard Stallman's GNU make process. One general criticism is that it doesn't cover enough of the gritty details to make the book generally useful.  Real developers have real problems in any one project beyond the basics, and if one can't get help with m4 or the underlying Autotools process then people often turn to other methods that are easier. Easier, but not better; there's a reason these Autotools have been around since the 1980s: they work if you understand them.


RMS 2014

While I have used and lauded the value, moral and commercial, of open source software and GNU in particular, with any new technology that disrupts old technologies we merely change the benefits and problems associated with the way of doing business. Read W. Brian Arthur's book The Nature of Technology to understand more.

The single biggest failing of the GNU effort and Stallman's dogma is that unpaid free software doesn't employ people to clean it up and document everything properly, so it leaves the hapless users "free" to do it all themselves. Many free-software developers spend all their time just getting the software working, and they under-document because of that time constraint. Many are new and not very seasoned. They pick up software ideas at school, full of idealism and vigour, and believe in them for as long as they are interested in experimental jazz or nootropics.  Then they grow up, get a paying job, or drop out.  Software ebbs and flows and sits idle. Corporate knowledge is lost. People pick up the pieces years later, or not. Now it's up to you.

All that means is when you don't pay someone to increase the usability, you end up paying with your time.



Remember, Linus Torvalds (remember the GNU/Linux naming saga) is also an open source advocate, and he argues projects are 99% perspiration, not "innovation". I would argue Linux is more substantial than any GNU project, even GCC, which is considerable. Most small-team coder projects, which is the GNU jungle, end up in the dustbin of history because they don't catch on and maintain momentum. So if you want what they were working on and no one else is doing it, it falls to you.  I am lucky that I have a job where I work at my own pace; I can tinker and figure it out. It is my duty to explain to others not so fortunate, because that is the sharing and community commitment of members.


In my case, I work on both Fedora and Ubuntu (Debian) distributions and I use the ATLAS linear algebra library.  Both distributions bifurcate the locations of the libraries and the #include headers, so if I want to use them I need to know where they are.  Adding to the complication, they change the names of the packages containing the library and the development headers. For example, under Ubuntu the library headers are in libXXX-dev packages, and under Fedora they are in libXXX-devel RPMs. They install headers under /usr/include, but in some cases the headers are under an atlas/ subdirectory and other times not. The libraries are in /lib or /lib64 or /usr/lib or /usr/lib64, and it's not consistent.

In this entry I show a complete AC_CHECK_HEADER example to find one header, atlas_pthreads.h.

Inside configure.ac you invoke the m4 macro AC_CHECK_HEADER with an AC_DEFINE action on success:

AC_CHECK_HEADER([atlas_pthreads.h], [AC_DEFINE([HAVE_ATLAS_PTHREADS_H],[1],[[Checking for 1 level atlas.h header]])], [] ) 

You will need to add an extra invocation for each permutation of header location. Change the HAVE_..._H constant name appropriately and make it intuitive (as in mentally linked to the reason for the constant); you will thank yourself later.  In case the header is located inside the atlas/ directory you would need another one like this:

AC_CHECK_HEADER([atlas/atlas_pthreads.h], [AC_DEFINE([HAVE_ATLAS_ATLAS_PTHREADS_H],[1],[Checking for base level atlas.h header ])], [] ) 

The one above changes the constant name to the one for the header inside the atlas/ folder.

When you think about it, it's quite shocking this wasn't done in the 2001 edition of a book covering the GNU Autotools.  The software was mature by then. The lack of depth in Appendix D is really a disservice to those who paid $40.00 for it.

We don't use the last [action-if-not-found] argument because we are going to call several check-header macros for the possible locations; one of them will succeed and the rest will not get defined. Since we only define the constants in the action-if-found, they won't appear in config.h or in the Makefile without a successful file location. The defined value (1) largely doesn't matter in this technique; there are other ways to use it, but for this case you only need #ifdef to test for the constant.

Inside any .h file one would include these #ifdef macros:

#include "config.h"


#ifdef HAVE_ATLAS_PTHREADS_H
#include <atlas_pthreads.h>
#endif
#ifdef HAVE_ATLAS_ATLAS_PTHREADS_H
#include <atlas/atlas_pthreads.h>
#endif

I always forget to #include the config.h header to start with, and then when I compile the first time it hits me when GCC doesn't recognize any of the constants it should. You should always promote missing-header warnings to errors so you don't silently pick up a mistaken header version or a reference to some other linked library that happens to be installed by other software. There are ways around it, but don't give in to the temptation; you will thank yourself later when you avoid phantom errors.


Here's a snippet of my latest configure.ac in case it helps anyone:




# Checks for header files.
AC_CHECK_HEADERS([fcntl.h stdlib.h string.h stdio.h unistd.h  error.h ])
AC_CHECK_HEADERS([stdlib.h string.h unistd.h glib.h glibconfig.h math.h])
AC_CHECK_HEADERS([freeglut.h])
AC_CHECK_HEADERS([freeglut_ext.h])
AC_CHECK_HEADERS([freeglut_std.h])
AC_CHECK_HEADERS([glut.h])
AC_CHECK_HEADERS([glu.h])
AC_CHECK_HEADERS([gl.h])
AC_CHECK_HEADERS([gl/glut.h])
AC_CHECK_HEADERS([GL/freeglut.h])
AC_CHECK_HEADERS([GL/gl.h])
AC_CHECK_HEADERS([GL/glu.h])
AC_CHECK_HEADERS([GL/glut.h])
AC_CHECK_HEADERS([GL/freeglut_ext.h])
AC_CHECK_HEADERS([GL/freeglut_std.h])
AC_CHECK_HEADERS([GL/glew.h])
AC_CHECK_HEADERS([GL/wglew.h])

AC_CHECK_HEADER([atlas.h], AC_DEFINE([HAVE_ATLAS_H],[1],[ [Checking for top level atlas.h header] ]), [])
AC_CHECK_HEADER([atlas/atlas.h], AC_DEFINE([HAVE_ATLAS_ATLAS_H],[1],[ [Checking for 1 level atlas.h header] ]), [])
AC_CHECK_HEADER([atlas_buildinfo.h], AC_DEFINE([HAVE_ATLAS_BUILDINFO_H],[1],[ [Checking for top level buildinfo.h header] ]), [])
AC_CHECK_HEADER([atlas/atlas_buildinfo.h], AC_DEFINE([HAVE_ATLAS_ATLAS_BUILDINFO_H],[1],[ [Checking for 1 level atlas_buildinfo.h header] ]), [])
AC_CHECK_HEADER([atlas/atlas_cacheedge.h], AC_DEFINE([HAVE_ATLAS_ATLAS_CACHEEDGE_H],[1],[ [Checking for 1 level atlas_cacheedge.h header] ]), [])
AC_CHECK_HEADER([atlas_pthreads.h], [AC_DEFINE([HAVE_ATLAS_PTHREADS_H],[1],[[Checking for 1 level atlas.h header]])], [] ) 
AC_CHECK_HEADER([atlas/atlas_pthreads.h], [AC_DEFINE([HAVE_ATLAS_ATLAS_PTHREADS_H],[1],[Checking for 1 level atlas.h header ])], [] ) 
AC_CHECK_HEADER([atlas/clapack.h], [AC_DEFINE([HAVE_ATLAS_CLAPACK_H],[1],[[Checking for 1 level atlas/clapack.h header]])], [])
AC_CHECK_HEADER([atlas/atlas_f77.h], [AC_DEFINE([HAVE_ATLAS_F77_H],[1],[[Checking for 1 level atlas_f77.h header]])], [] )
AC_CHECK_HEADER([atlas/blas.h], AC_DEFINE([HAVE_ATLAS_BLAS_H],[1],[ [Checking for 1 level atlas/blas.h header] ]), [])
AC_CHECK_HEADER([atlas/cblas.h], AC_DEFINE([HAVE_ATLAS_CBLAS_H],[1],[ [Checking for 1 level cblas.h header] ]), [])
AC_CHECK_HEADER([blas.h], AC_DEFINE([HAVE_BLAS_H],[1],[ [Checking for top level blas.h header] ]), [])
AC_CHECK_HEADER([lapack.h], AC_DEFINE([HAVE_LAPACK_H],[1],[ [Checking for top level lapack.h header] ]), [])
AC_CHECK_HEADER([cblas.h], AC_DEFINE([HAVE_CBLAS_H],[1],[ [Checking for top level cblas.h header] ]), [])
AC_CHECK_HEADER([clapack.h], AC_DEFINE([HAVE_CLAPACK_H],[1],[ [Checking for top level clapack.h header] ]), [])

AC_CHECK_HEADER([protobuf-c.h],[AC_DEFINE([HAVE_PROTOBUF_C_H],[1],[ [Checking for top level protobuf-c.h header] ])],[])
AC_CHECK_HEADER([protobuf-c/protobuf-c.h],[AC_DEFINE([HAVE_PROTOBUF_C_PROTOBUF_C_H],[1],[ [Checking for 1 level protobuf-c.h header] ])],[])
AC_CHECK_HEADER([google/protobuf-c/protobuf-c.h],[AC_DEFINE([HAVE_GOOGLE_PROTOBUF_C_PROTOBUF_C_H],[1],[ [Checking for 2 level protobuf-c.h header] ])],[])

# try to find libraries in the usual places - then set variables to indicate linkable libraries
AC_CHECK_LIB([m],[fabs],[],[echo fabs not found] )
# OpenGL libraries
AC_CHECK_LIB([glew],[glewInit],[AC_SUBST([HAVE_LIBGLEW],[-lglew])],[echo glew not found] )
AC_CHECK_LIB([GLEW],[glewInit],[AC_SUBST([HAVE_LIBGLEW],[-lGLEW])],[echo GLEW not found] )
AC_CHECK_LIB([glut],[glutSolidSphere],[AC_SUBST([HAVE_LIBGLUT],[-lglut])],[echo glut not found] )
AC_CHECK_LIB([glu],[gluPerspective],[AC_SUBST([HAVE_LIBGLU],[-lglu])],[echo glu not found] )
AC_CHECK_LIB([GLU],[gluPerspective],[AC_SUBST([HAVE_LIBGLU],[-lGLU])],[echo glu not found] )
AC_CHECK_LIB([gl],[glViewport],[AC_SUBST([HAVE_LIBGL],[-lgl])],[echo gl not found] )
AC_CHECK_LIB([GL],[glViewport],[AC_SUBST([HAVE_LIBGL],[-lGL])],[echo GL not found] )


Sunday, March 12, 2017

My new hero, Robert D French

https://github.com/robertdfrench/cmake-refuckulator


Here is my new hero: he met a piece of technology he hated because it didn't work as advertised, and he griped about it - a lot. While whining and quitting smoking he wrote an anti-technology to negate the effects of CMake, a build system I don't think is very good either.  Well played!

His sacrifice and profanity will now help me remove the odious CMake from any project I deem unhelpful.

Thank you!

Friday, January 20, 2017

Here's how not to make your software library useful

You lock your functional code that does the useful work into an archaic and unique GUI interface.  Leveraging code works best when you separate the special functions from the interface; that way it's portable.

Now a developer saving your library from obsolescence has to either get the old interface working or rip that part out to save the important, specialized contribution that you worked on for a considerable time.



To show you whereof I speak, here's a tree (-L 1) of my source-code directory:

.
├── 2007-12-17-freedius-linux.tar.gz
├── 32db.xml
├── ADMS-master
├── ADMS-master.tar.gz
├── aio
├── aio1
├── aio1.c
├── aio3
├── aio3.c
├── aio4.c
├── aiocat.c
├── aio_main.c
├── aiopipe.c
├── apue.2e
├── apue.linux.tar.gz
├── APUE-src.2e.tar.gz
├── atprio.1.c
├── atprio.2.c
├── atprio.3.c
├── autoconf-archive
├── autogen-5.18.7
├── autogen-5.18.7.tar.gz
├── bar
├── behave
├── bin
├── binary
├── binary.c
├── bitcoin-miner
├── blahtexml-master
├── bugfixes2.html
├── bugfixes3.html
├── bugfixes4.html
├── bugfixes5.html
├── bugfixes6.html
├── bugfixes.css
├── bugfixes.html
├── build
├── build-doxample
├── build-hello
├── camwire2-1.9.4-Source
├── camwire2-1.9.4-Source.tar.gz
├── CANBus
├── carmen-0.7.4-beta
├── carmen-0.7.4-beta.tar.gz
├── cexp
├── cexp-2.2.1.tgz
├── cexp-2.2.2.tar.bz2
├── cexp-2.2.3.tar.bz2
├── cexp-2.2.tgz
├── cexp-CEXP_Release_2_2
├── cexp-CEXP_Release_2_2_1
├── cexp.tar.bz2
├── chroma.h
├── contiki_rosnode-master
├── contiki_rosnode-master.zip
├── coq-8.4pl6
├── correlator
├── cpubench.c
├── cpubound.c
├── cvodes-2.8.2
├── cvodes-2.8.2.tar.gz
├── cvs_archive.tar.gz
├── cvs_backup
├── date +%s-libdrdc.tar.bz2
├── def-guide-to-linux-network-programming-master
├── def-guide-to-linux-network-programming-master.tar.gz
├── demo-udp-01.tar.gz
├── dempstershaferlib-read-only
├── dol-ethz
├── doxample-0.1
├── doxample-0.1.1.tar.gz
├── doxample-0.1.tar.gz
├── DroneDynamics
├── encrypted-session-nginx-module-master
├── encrypted-session-nginx-module-master.zip
├── epubcheck-3.0.1
├── epubcheck-src-3.0.1
├── examples-v2-4.10.2
├── examples-v2-4.10.2.tar.bz2
├── expat-expat.tar.gz
├── fifo.c
├── finbot-0.1-30
├── fix
├── flann-1.8.4-src.zip
├── freedius-cmucl-x86.tar.gz
├── gazebo-2.2.3.tar.bz2
├── gazebo-7.1.0
├── gazebo-7.1.0.tar.gz
├── gazebo_position2d
├── gcc.out
├── Getopt-Gen-0.10
├── gfsm-0.0.15-1
├── gfsm-0.0.15-1.tar.gz
├── gfsmxl-0.0.13
├── gfsmxl-0.0.13.tar.gz
├── git
├── git-access-token-wider
├── git-access-token-wider~
├── github-git-cheat-sheet.pdf
├── github-libdrdc
├── github-recovery-codes.txt
├── gldemo.c
├── gldraw.c
├── gnu_select.c
├── gnu_select.c~
├── good_src
├── grip-3.3.1
├── grip-3.3.1.tar.gz
├── gts_0.7.6+darcs121130.orig.tar.gz
├── helloworld_cc-0.5
├── helloworld_cc-0.5-doxygen.tar.bz2
├── helloworld_cc-0.5.tar.gz
├── ignition-math-1.0.0
├── ignitionrobotics-ign-msgs-07607e5f1e77
├── ignitionrobotics-ign-msgs-07607e5f1e77.tar.bz2
├── index.html
├── install-meshlab.sh
├── install-meshlab.sh~
├── iobound.c
├── iopipe.c
├── ipc-3.9.1
├── ipc-3.9.1a.tar.gz
├── jhead-2.97
├── jhead-2.97.tar.gz
├── jhsample.tex
├── jitter.p4.c
├── kompozer
├── kompozer-0.8b3.en-US.gcc4.2-i686.tar.gz
├── lambda-master
├── lambda-master.zip
├── LENS
├── lens
├── levmar-2.6
├── levmar-2.6.tar.gz
├── lib3ds-1.3.0
├── lib3ds-1.3.0.tar.gz
├── lib7zip-1.6.5
├── libdc1394-2.2.1
├── libdrdc
├── libdrdc.tar.bz2
├── libEasySM_0_6.bz2
├── libhungarian-0.1.tar.gz
├── libinetsocket.c
├── libmozjs.so
├── libnmea-0.5.3
├── libnmea-0.5.3.tar.gz
├── libpico.tar.gz
├── libposemath-2014.04.29
├── libposemath-2014.04.29.tar.gz
├── libringbuffers-0.2.0
├── libringbuffers-0.2.0-4-gb1fa8b7
├── libringbuffers-0.2.0-4-gb1fa8b7.tar.gz
├── libringbuffers-0.2.0.tar.gz
├── libstate-0.1
├── lighttpd-1.4.40
├── lighttpd-1.4.40.tar.gz
├── lighttpd-1.4.41.tar.gz
├── localizer
├── make_archive.sh
├── marrspace
├── marrspace-0.1
├── marrspace-0.1.1.tar.bz2
├── marrspace-0.1.tar.bz2
├── marrspace-code
├── marrspace-code~
├── marrspace-code~~
├── marrspace-code2.tar.gz
├── marrspace-code.tar.bz2
├── math-tester
├── meshlab
├── MeshLabSrc_AllInc_v133.tgz
├── MESHLAB.tar.gz
├── MESSAGES
├── MESSAGES~
├── MESSAGES~~
├── meta-autonomy
├── minerd
├── minerd-ubuntu
├── MISC
├── mksem.c
├── mml.html
├── mn0x.png
├── mn1x.png
├── mn2x.png
├── mn3x.png
├── mn4x.png
├── mn5x.png
├── mn6x.png
├── mn.css
├── monad.c
├── MP4Joiner-2.1.2
├── MP4Joiner-2.1.2.tar.gz
├── mpi-examples
├── msg.c
├── muparser_v2_2_3
├── muparser_v2_2_3.tar.gz
├── nanomsg-0.6-beta
├── nanomsg-0.6-beta.tar.gz
├── NetlistViewer-0.1
├── netstat
├── network-demos-4.10.2
├── network-demos-4.10.2.tar.bz2
├── network-status.txt
├── Neural Networks at your Fingertips
├── new
├── Newserver
├── nginx-48bab8b83f4e
├── nginx-dev
├── nginx_udplog_module-1.0.0
├── ngspice-26
├── ngspice-26.tar.gz
├── ngx_devel_kit-master
├── ngx_http_stat_module-master
├── noswitch.c
├── ntree
├── ObjectCore
├── ObjectCore.tar.bz2
├── octave-3.6.4
├── octave-3.6.4.tar.bz2
├── ogl_fps_controls.zip
├── old
├── ompi-master
├── ompi-master.zip
├── OpenCV
├── opencv-1.1.0
├── opencv-1.1pre1.tar.gz
├── OpenGL
├── opengl-floatingcamera.c
├── opengl-floatingcamera.c~
├── openmpi-1.6.5
├── openmpi-1.6.5.tar.bz2
├── openmpi-2.0.1
├── openmpi-2.0.1.tar.bz2
├── oregano-master
├── p4code.zip
├── p4src
├── PDP
├── pdptool.zip
├── periodic_timer.c
├── periodic_timer.p4.c
├── PID-master
├── PID-master.tar.bz2
├── PixyMon
├── player-1.6.5
├── player-1.6.5.tar.gz
├── player-2.0.5
├── player-2.1.3
├── player-2.1.3.tar.bz2
├── player-3.0.1
├── player-3.0.2
├── player-3.0.2.tar.bz2
├── player-3.0.2.zip
├── pooler-cpuminer-2.4-linux-x86.tar.gz
├── portals4
├── portals4-1.0a1
├── posemath
├── productArray.c
├── protobuf-c-1.0.2
├── protobuf-c-1.0.2.tar.bz2
├── protobuf-c-1.1.0
├── protobuf-c-1.1.0.tar.gz
├── protobuf-c-1.1.1
├── protobuf-c-1.1.1.tar.gz
├── protobuf-c-master
├── protobuf-c-master.tar.gz
├── protobuf-c-rpc-master
├── protobuf-c-rpc-master-1.tar.bz2
├── protobuf-c-rpc-master.tar.bz2
├── protobuf-c-text-1.0.0
├── protobuf-c-text-1.0.0.tar.gz
├── protobuf-c-text-master
├── protobuf-c-text-master.zip
├── qucs-0.0.19
├── qucs-0.0.19-160204-git-83cc216.tar.gz
├── README
├── ringbuffer
├── rmsem.c
├── ros_messages
├── ROS.NET-master
├── ROS.NET-master.zip
├── RTEMS
├── rtl
├── rtl-host
├── saph62d_6s.tar.gz
├── Saphira-8.4-1.i386.rpm
├── scalable-vector-graphics-svg
├── seamonkey
├── seamonkey-2.17.1.tar.bz2
├── sending_recving_sigs_self.c
├── sending_sigs.c
├── sending_sigs_self.c
├── server
├── server2.tar.gz
├── server2.zip
├── server.zip
├── set_var
├── shm.c
├── shmmutex_flock.c
├── shmmutex_sem.c
├── shmmutex_semembed.c
├── Sigil-master
├── Sigil-master.tar.gz
├── sigs_sent_noswtch.c
├── sigs_sent_swtch.c
├── sigs_sent_swtch.p4.c
├── sk1-0.9.3
├── sk1-0.9.3.zip
├── src
├── startcoin-master
├── startcoin-master.tar.gz
├── Stellar_1.0
├── Stellar_1.0.tar.gz
├── store
├── stratum-mining-master
├── stream-echo-nginx-module-master
├── stream-echo-nginx-module-master.tar.gz
├── string_example
├── Structure Synth Source Code
├── StructureSynth-Source-v1.5.0.tar.gz
├── svg-edit-2.6
├── svgedit_app
├── switch.c
├── task-nginx-udp-log-module-master
├── task-nginx-udp-log-module-master.tar.gz
├── tcl8.3.5
├── tcl8.3.5-src.tar.bz2
├── temp
├── tensorflow-master
├── tensorflow-master.tar.gz
├── test
├── test_gnu
├── test-rpc.c
├── tex4ht-1.0.2009_06_11_1038
├── tex4ht-all.tar.gz
├── tex4ht-env-unix.txt
├── tex4ht-env-win32.txt
├── tex4ht-lit.zip
├── tex4ht-svn
├── tex4ht.zip
├── texmf
├── tk8.3.5
├── tk8.3.5-src.tar.bz2
├── tlpi-160726-dist.tar.gz
├── tlpi-dist
├── tpl-master
├── tpl-master.tar.gz
├── tree
├── tree.c
├── tree.out
├── trunk
├── trustengine-read-only
├── tth_C
├── tth_C.tar.gz
├── Turing_machine_simulator_(C)
├── Turing_machine_simulator_(C).tar.gz
├── ultra-new
├── Untitled Folder
├── Untitled Folder 2
├── va_arg
├── vcglib
├── vec_projection.m
├── velodynedriver
├── velodynedriver-1.0
├── velodynedriver-1.0.1
├── velodynedriver-1.02
├── velodyne_xml_parser.c
├── ver62
├── vislib-V1.9.4.tar.gz
├── vsched.c
├── worker_posix.h
├── workspace
└── xMind

172 directories, 200 files

I've worked with over 150 open source projects.