Category Archives: Articles

Visualization and Simulation of Membrane Computing

Abstract

Context/Background

Membrane computing is an unconventional computing paradigm in the area of natural computing which builds a model of computation from the structure and function of living cells; it is massively parallel, distributed, and non-deterministic. Research in this field is gaining interest, and thus better tools for the simulation and visualization of membrane systems and their algorithms are required.

Aims

We will pioneer new methods and tools for the specification, simulation and visualization of membrane systems on electronic computers, in addition to suggesting new data structures for simulation and new paradigms for membrane specification.

Method

We will devise a language to specify Membrane Computing systems, a parser to translate this specification into a membrane structure stored in memory, and a simulator/interpreter to perform all the calculations associated with a transition membrane system using a fast linear algorithm.

Proposed Solution

We will deliver:

  • A specification for a ‘Membrane Language’, implemented in a fashion similar to that of Object-Oriented Programming.
  • A simulator/interpreter, implementing fast linear algorithms and data structures for tracking membrane computation.
  • A visualization engine rendering the state of a membrane system, in addition to providing precise control over playback and debugging features.
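
As a rough illustration of the kind of in-memory structure the parser might build, here is a minimal Java sketch (the class and field names are hypothetical), assuming a P-system-style tree of membranes, each holding a multiset of symbol objects:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: each membrane holds a multiset of symbol objects
// (symbol -> multiplicity) and a list of child membranes, forming a tree.
class Membrane {
    final String label;
    final Map<String, Integer> objects = new HashMap<String, Integer>();
    final List<Membrane> children = new ArrayList<Membrane>();

    Membrane(String label) {
        this.label = label;
    }

    // Add 'count' copies of a symbol to this membrane's multiset
    void add(String symbol, int count) {
        Integer current = objects.get(symbol);
        objects.put(symbol, current == null ? count : current + count);
    }
}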

Design: Visualization and Simulation of Membrane Computing

Video of Prototype: http://www.youtube.com/watch?v=n4ZitkGDpbo

TimSort

Some of you may know I have a bit of an obsession with sorting algorithms ^_^

Anyway, there is a revolutionary algorithm coming to greater light, developed in 2002 by Tim Peters and called TimSort 😛 which is an adaptive, stable, natural mergesort. It has supernatural performance on many kinds of partially ordered arrays (fewer than lg(N!) comparisons needed, and as few as N-1), yet it is as fast as Python’s previous highly tuned samplesort hybrid on random arrays.

And for us Java nerds: it will be replacing Java’s Collections.sort() as of Java 7.

Anyway, it’s sexy as hell, check it out.
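
To give a flavour of the ‘natural mergesort’ idea at its core, here is a minimal Java sketch of my own (not Tim Peters’ code; real TimSort adds minimum run lengths, binary insertion sort and galloping merges on top of this):

import java.util.Arrays;

// Sketch of a natural mergesort: find runs that are already sorted,
// then repeatedly merge adjacent runs until one run remains.
public class NaturalMergeSort {
    public static void sort(int[] a) {
        int n = a.length;
        boolean merged = true;
        while (merged) {
            merged = false;
            int start = 0;
            while (start < n) {
                int mid = runEnd(a, start);     // end of the first run
                if (mid >= n) break;            // only one run left this pass
                int end = runEnd(a, mid);       // end of the second run
                merge(a, start, mid, end);
                merged = true;
                start = end;
            }
        }
    }

    // Returns the index one past the end of the ascending run starting at i
    private static int runEnd(int[] a, int i) {
        i++;
        while (i < a.length && a[i - 1] <= a[i]) i++;
        return i;
    }

    // Standard merge of a[lo..mid) and a[mid..hi) using a temporary buffer
    private static void merge(int[] a, int lo, int mid, int hi) {
        int[] tmp = new int[hi - lo];
        int i = lo, j = mid, k = 0;
        while (i < mid && j < hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i < mid) tmp[k++] = a[i++];
        while (j < hi) tmp[k++] = a[j++];
        System.arraycopy(tmp, 0, a, lo, tmp.length);
    }

    public static void main(String[] args) {
        int[] a = {3, 4, 5, 1, 2, 8, 7};   // two ready-made runs plus a straggler
        sort(a);
        System.out.println(Arrays.toString(a));
    }
}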


How we Teach Introductory Computer Science is Wrong: Rebuttal

The original 1985 Sweller and Cooper paper on worked examples had five studies with similar set-ups. There are two groups of students, each of which is shown two worked-out algebra problems. Our experimental group then gets eight more algebra problems, completely worked out. Our control group solves those eight problems. As you might imagine, the control group takes five times as long to complete the eight problems as the experimental group takes to simply read them. Both groups then get new problems to solve. The experimental group solves the problems in half the time and with fewer errors than the control group. Not doing problem-solving led to better problem-solving skills than doing problem-solving did. That’s when educational psychologists began to question the idea that we should best teach problem-solving by having students solve problems.

How we Teach Introductory Computer Science is Wrong

The paper then goes on to explain how programmers should now be taught in the same way. This would seem to suggest that the best programming course would be a slide show of common programming problems and example code that solves them. I, however, have a problem with this: I think this method of teaching will turn out really, really good code monkeys, NOT programmers. I would like to refer back to the FizzBuzz test, for those of you who have not heard me personally rant about it:

Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”.

Most good programmers should be able to write out on paper a program which does this in under a couple of minutes. Want to know something scary? The majority of comp sci graduates can’t. I’ve also seen self-proclaimed senior programmers take more than 10-15 minutes to write a solution.
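
For reference, one straightforward Java solution looks like this:

public class FizzBuzz {
    public static void main(String[] args) {
        for (int i = 1; i <= 100; i++) {
            // multiples of both three and five first, so they don't print "Fizz"
            if (i % 15 == 0)      System.out.println("FizzBuzz");
            else if (i % 3 == 0)  System.out.println("Fizz");
            else if (i % 5 == 0)  System.out.println("Buzz");
            else                  System.out.println(i);
        }
    }
}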

Why Can’t Programmers.. Program?

I think this is the problem. In that first study, where students were shown how to work out algebra problems, they will now work out those problems much faster than any other student. However, I believe that if they are shown problems that vary from these even by a small margin, they will begin to fail very fast. This, I believe, is what the FizzBuzz test picks up on: as undergrads we are never really shown a problem that looks like FizzBuzz, so only those programmers who know how to solve original problems can do it.

I believe there are really 3 kinds of programmer:

  • Code Monkeys: adapt solutions they have seen before to similar problems.
  • Programmers: can create new solutions to problems within the logical structure they understand.
  • Hackers (Uber Programmers): can adapt the solutions within the logical structure they understand to solve problems in unexpected ways.

Taking the above material into account, I believe you train:

  • Code Monkeys: by showing them code; they will then be able to stick that code together with other bits and solve simple problems.
  • Programmers: by making them solve problems themselves, and even better, getting them to work on private projects; even if those projects are crazy and will never work, they will learn skills in doing them.
  • Hackers (Uber Programmers): I’m not sure how you train these, other than setting them problems they should not be able to solve, and seeing what happens.

There is room in the world for all of these kinds of programmers, but be aware of which type you are creating, because Code Monkeys can solve a lot of problems, but to produce something great you nearly always need a Programmer or a Hacker.

WeakLink password breaking

I have been thinking about password security quite a lot today, and it occurred to me that most people, including myself, use universal passwords. We know it’s wrong, but we almost can’t help ourselves; it’s too human.

The thing we fail to think about enough is that the more we give this password out, the more danger we put it in; every website that has it in their database is a new link in the chain protecting our password. All that is required to get your password is to break into the weakest website that has it; all you need is one admin who is not hashing his passwords, or is failing to salt them.
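
For contrast, doing it right is not hard. Here is a minimal Java sketch of salted, deliberately slow password hashing using the standard PBKDF2 key-derivation function (the iteration count, key length and class name here are just illustrative):

import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

// Minimal sketch: store the salt and the derived hash, never the password itself.
public class PasswordStore {

    // A fresh random salt per user, so identical passwords hash differently
    static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    // A deliberately slow, salted hash (PBKDF2 with 10,000 iterations)
    static byte[] hash(char[] password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, 10000, 256);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1")
                .generateSecret(spec).getEncoded();
    }
}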

Thus, when personally attacking someone’s electronic life, if they have a universal password all you need to do is attack the weakest website or system that has it, and for most people there could be up to 50-100 systems. One of these is bound to have a security hole.

The second possible mode of attack is to set up a dummy website and get the person in question to sign up to it. All that needs to be done then is to read the password straight out of the database.

On this form you can ask for all the other details you want:

  • Name
  • Address
  • Email
  • Username

You could even ask for information such as a mother’s maiden name. With all this you could get access to email accounts, Facebook accounts, webspace, and e-commerce accounts (Amazon, Email, Play.com). With additional information gained from these you could then even access bank accounts.


Message of the Day
DON’T BE HUMAN -> DON’T USE A UNIVERSAL PASSWORD

GPU Programming thoughts

As I discussed in All for One, and One for All{}, I am becoming increasingly obsessed with GPUs; I truly believe they hold the greatest untapped power in most modern computers. Not only this, but as we are seeing a rising trend in multi-core CPUs, these ideas may become more relevant to CPU programming too.

Further to my idea of the ‘for all’ loop, I have considered some other ways in which native parallel computing could be put to use in a third-generation language.

  • For All Loop
    An alternative to the ‘For Each’ loop, where work could be sent to a GPU with multiple cores and each core performs one or more of the iterations.
  • Distribute Statement
    An alternative to the ‘Switch’ statement for when you expect 2 or more cases to be true; the code for each case that resolves to true would be run on one of the GPU’s cores.
  • Independent Method
    This would work like a static method, but would be completely independent of the class (no references to non-final static variables). This would allow the method to be run safely on multiple cores in parallel. This could be very useful in recursive algorithms where the method makes multiple calls to itself, e.g. in a quicksort (see the sketch after this list):
    return quickSort(left[]) + pivot + quickSort(right[]);

    There is no reason why the left and right quicksorts could not be run in parallel.
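
To give a taste of the independent-method idea on today’s hardware, here is a minimal sketch of my own using the fork/join framework arriving in Java 7 (the class name is mine; the GPU version remains hypothetical). The two recursive calls touch disjoint halves of the array, so they can safely run as parallel tasks:

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

// Sketch: quicksort where the left and right recursive calls run in parallel.
class ParallelQuickSort extends RecursiveAction {
    private final int[] a;
    private final int lo, hi;

    ParallelQuickSort(int[] a, int lo, int hi) {
        this.a = a; this.lo = lo; this.hi = hi;
    }

    @Override
    protected void compute() {
        if (lo >= hi) return;
        int p = partition(a, lo, hi);
        // the halves share no elements, so fork them as independent tasks
        invokeAll(new ParallelQuickSort(a, lo, p - 1),
                  new ParallelQuickSort(a, p + 1, hi));
    }

    // Lomuto partition around the last element
    private static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
        }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;
        return i;
    }

    public static void main(String[] args) {
        int[] a = {5, 3, 8, 1, 9, 2};
        new ForkJoinPool().invoke(new ParallelQuickSort(a, 0, a.length - 1));
        System.out.println(java.util.Arrays.toString(a));
    }
}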

These ideas still contain many problems, e.g. an independent method containing a for-all loop containing a distribute statement, not to mention how to keep it all ‘thread safe’. However, I have ideas on how these problems may one day be solved; it is my intention to use these as the basis of my dissertation, with a view to possibly even designing a language which implements some, if not all, of them.

All for One, and One for All{}

A topic that has been plaguing my brain for some weeks now is GPUs. I have come to believe they are a very good source of untapped power inside modern computers, and not simply for graphics.

The gist: a modern-day GPU is a specialized processor. Its highly parallel structure makes it more effective than a general-purpose CPU for a range of complex algorithms, particularly graphics programming algorithms.

However, the range of algorithms they are good for is not limited to graphics; their simple, highly parallel structure with multiple cores means they are extremely well suited to performing simple operations on very large amounts of data in no particular order.

For example, we have an array

int[] a;

of 100 items. Assume the i++ operation takes one second, and all other operations take so little time as to be of no consideration.

A normal for each loop

for (int i : a)
{
    i++;    // illustrative pseudocode for 'increment each element' (in real Java this would only increment a copy)
}

would take 100 seconds

However, if this work could be sent to a GPU with 100 cores and each core performed one iteration, it would still use 100 seconds of processor time, but in parallel, and thus the execution of the loop would only take 1 second. I dub this idea the for all loop:


for (int i :: a)    // hypothetical 'for all' syntax
{
    i++;
}

This would allow programmers to decide between for each and for all loops, the former still being necessary when the items in the array need to be processed serially / in order.
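
Until a language grows a real for all loop, the idea can be approximated on a multi-core CPU with a thread pool. A minimal Java sketch (the class name is hypothetical), where every iteration is an independent task executed in no particular order:

import java.util.Arrays;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Approximating 'for all' by submitting one task per iteration to a pool
public class ForAllDemo {
    public static void main(String[] args) throws InterruptedException {
        final int[] a = new int[100];
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        for (int j = 0; j < a.length; j++) {
            final int i = j;
            pool.execute(new Runnable() {
                public void run() { a[i]++; }   // one iteration per task, any order
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println(Arrays.toString(a));
    }
}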

Nvidia already has the CUDA programming language for its GeForce GPUs; it is my belief that should Sun or another big company choose to implement this idea, it would be feasible to make nearly all GPUs available to the interpreters of many languages.

The Modern Switch Statement

switch (someInt) {
    case 1:
    case 2: System.out.println("Forgot a break, idiot!");
    case 3: System.out.println("Now you're doing the wrong thing and maybe need hours to find the missing break muahahahaha");
            break;
    default: System.out.println("This should never happen -,-");
}

I was thinking about this, and it’s stupid, because nearly every time I write a switch statement I want to break; not wanting to is more of a freak occasion than the norm. So it would seem to me that the sane thing to do is change it so that instead of being break;-driven, switches would be continue;-driven. This would solve a lot of problems; many a programming bug has been caused by a missing break;.

such that the new switch statement would look something like:

switch (someInt) {
    case 1: System.out.println("Just minding my own business");
    case 2: System.out.println("I nicely execute only on a 2"); continue;
    case 3: System.out.println("I execute on a 2 or a 3");
    default: System.out.println("This should never happen -,-");
}
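
For comparison, here is what that example means when desugared back into today’s break;-driven Java:

switch (someInt) {
    case 1: System.out.println("Just minding my own business");
            break;      // implicit in the proposed continue-driven form
    case 2: System.out.println("I nicely execute only on a 2");
            // the proposal's continue; means falling through here
    case 3: System.out.println("I execute on a 2 or a 3");
            break;      // implicit again
    default: System.out.println("This should never happen -,-");
}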

Object Happiness

I’ve just been making my way through the archives of Coding Horror by Jeff Atwood, as I often do. It’s one of those good places where you can learn how to program and think about why you’re writing that code that way, rather than just how to write Java/C/PHP…

In one of his articles he mentions a concept called object happiness. I would like to take this opportunity to say this;

Hi, my name is Gwilym and I’m Object Happy.

Worse, I am design happy. The first thing I do when I get my greasy little fingers on a new coding project is this: I design the hell out of it. I’m always learning about new ways of coding, and there will be a dozen things I want to try out. I will stand in front of my whiteboard thinking “yes, yes, we can have a singleton here, ah yes my precious, oh and we can abstract this out to here, oh my, and we can use an interface to link these together”. The result of this is that I waste coding time, because the design will now be so reusable, robust and readable that it will need about 50% more code than if I had sat down and just bashed the thing out. I know this because I have seen many other people’s solutions to the same problems where they have just coded it, and they nearly all work close to or just as well as mine; we both get 1sts, and all I get for the extra week of coding I put in is a nice comment from the lecturer who marked it.

Rimmer: Look, I’ve got my engineering re-sit on Monday; I don’t know anything. Where’s my revision timetable?
Lister: Wait, is this the thing in a- in all different colours, with all the subjects divided into study periods and rest periods and self-testin’ times?
Rimmer: It took me seven weeks to make it. I’ve got to cram my whole revision into one night.
Lister: Hang on, this the thing with a note on it, in red, said, “Vital, valuable, urgent! Do not touch on pain of death!”?
Rimmer: Yes!
Lister: I threw it away.

Red Dwarf

This has two good points:

  • Make sure you don’t spend stupid amounts of time on your design and leave yourself no time to implement.
  • Make sure your design is worth something, otherwise your work will just be thrown away.

Having thought about this, I would say:

Find a percentage, call it x, and make it sane, around 10-20%. When you design something, make sure that all the exciting things you thought of never add more than this to the total size/workload of your project.

For anything that you could code in a day, design it as you go along, or you will be there for a week.

If you have to, design it badly, code it, and hand it in. Then, if you really want to, you can always go back and have a quiet little refactor by yourself until your soul is once again happy.

But most of all, remember, as I try to: we are programmers, not designers, so CODE. Design is something to help you do that, but it is overall less important than code.

CUDA – The future of high performance computing?

CUDA is NVIDIA’s parallel computing architecture that enables dramatic increases in computing performance by harnessing the power of the GPU (graphics processing unit).

GPUs are massively multi-threaded many-core chips. GPU threads are extremely lightweight, so thread creation and context switching are essentially free. As they have this highly parallel structure, they are more effective than general-purpose CPUs for a range of complex algorithms.

CUDA is formed of a small set of extensions to the C programming language, which allows the power of GPUs to be harnessed for tasks normally performed by the CPU.

This can speed up algorithms which require large amounts of computation, especially those which can make use of parallel computation, such as AI problems, dense linear algebra, sequencing (virus scanning, genomics), sorting, and the list goes on.

Many companies and research projects already make use of large GPU clusters; I believe this is the immediate future of high-performance computing. And with more and more powerful GPUs being shipped in home computers, many modern programs may be able to take advantage of large amounts of parallel computing power to produce optimal or close-to-optimal solutions to NP problems as a matter of course.