Thursday, November 18, 2010

Podcasting

I've begun podcasting!  You can find my shows over at Hacker Public Radio, or, more specifically, you can find them at my correspondent profile page.


What is Hacker Public Radio?

Hacker Public Radio (HPR) is an Internet Radio show (podcast) that releases shows every weekday Monday through Friday. HPR has a long lineage going back to Radio FreeK America, Binary Revolution Radio & Infonomicon, and it is a direct continuation of Twatech radio. Please listen to StankDawg's "Introduction to HPR" for more information.

What differentiates HPR from other podcasts is that the shows are produced by the community - fellow listeners like you! There are no restrictions on how long a show can be, nor on the topics you can cover, as long as they "are of interest to hackers".


Also, please note that Hacker Public Radio needs more contributors!  If you have anything (anything!) that you'd like to record a show about that you think the hacking community would be interested in, please consider doing so, as we are desperately in need of more shows.  It's a great way to get your feet wet with podcasting, and it benefits a great project.  You can find more information on the "Contribute" page.


Let me know if you do contribute something!

Aquarium/Fishcam Update

I realize the fishcam has been down for quite some time.  I apologize for this, and have some explaining to do.

About a month ago I was getting ready to turn the fishcam back on, but Albert started acting sick, and I didn't want video of a suffering fish broadcasting to the internet all day.  I figured I'd turn the cam back on after he got better.

He appeared to be suffering from mouth fungus, but I'm not entirely sure, as he never had more than one white tuft at a time.  His biggest symptoms were lethargy and a lack of appetite.  During this time, Matcha seemed to be having no problems.

Albert started to get better, until one night the symptoms came back.  On the morning of the 12th, Albert died.  I cleaned the tank and medicated once more to try to keep Matcha from getting sick, and hoped for the best.


After this, Matcha seemed to lose his appetite as well, and quickly got sick, with symptoms much worse than Albert's, and a faster onset.  Yesterday he too died.

Looking back on it, I know there were some things I should have done a little better, like medicating earlier, but there's no use in that sort of thinking.  For now, I have no fish. This will be the case until the spring, when I will try again, hoping for the best.

Until then, needless to say, the fishcam will continue to be down.  An announcement will be made when it is back up. 

Tuesday, October 26, 2010

Hackers for Charity

After seeing the latest post on the Hackers for Charity site, I decided to write a quick post for those of my readers who don't know about this awesome organization, so they too can lend their thoughts and prayers to Johnny Long and his family as they go through some very rough times, both as a family and as an organization.

From the site:

What is HFC?

Picking on charities is just plain rude. Thankfully, that's not what we're about. We're about proving that hackers have amazing skills that can transform charitable organizations. We're about stepping into the gap to feed and educate the world's most vulnerable citizens. We are virtual, geographically diverse and different. We are Hackers for Charity.
So what do we do?

  • We feed children through our  "food for work" program.
  • We build computer labs to help students learn skills and land jobs that are key to disrupting poverty's vicious cycle. 
  • We provide technical assistance to charities that can not afford IT services.
  • We provide job experience and references to our volunteers.
Our largest project is headed by Johnny Long in East Africa. In June 2009, he and his family relocated to Uganda to focus on HFC full-time. Read more about their journey here or fund their volunteer work here. 

Any help that could be sent their way would be much appreciated by all involved in the project. It really is a wonderful thing they're doing, and they have great t-shirts as well.

Wednesday, October 13, 2010

Phreaknic 14

I'll be spending this weekend in Nashville at Phreaknic 14. I went to Phreaknic last year, and loved the experience. That was not only my first time at Phreaknic, but my first time at a hacker con.

Phreaknic is a "conference for the curious mind" as this year's flyer says. The conference features speakers from all over who talk on everything from roasting coffee beans to building a home nuclear reactor to penetration testing and many other interesting things.

If anyone is going to be in the Nashville area this weekend, you should stop by and check out the conference.

From the website:
PhreakNIC is an annual convention held in Nashville, TN. Originally started as a "hacker convention," it has since grown to include all things of interest to the technology minded individual, such as sci-fi/fantasy, gaming, anime and other areas of tech culture. PhreakNIC is organized by the Nashville 2600 Organization and the Nashville Linux Users Group. There are technical presentations, cultural exhibits and panels, as well as plenty of socializing. 

You can find out more about Phreaknic at the conference website.

Thursday, September 30, 2010

Programming Competition

Monday night I participated in my first ever programming competition.  It was an interesting experience which I thoroughly enjoyed.  The competition was held in the Computer Science department, and was open to all students of the University.

The goal of the competition was to find people to go on to the regional competition, which will be held either at Murray State University or at Louisiana State University (I really want to say something here, but I'll refrain...).  The reason for the two locations is that the site depends on which date the team decides to go.

In the competition we were given four problems to solve, and we were told we could access the official documentation for Java, C, and C++ (the three permitted languages), as well as make use of any printed materials we brought with us.  Out of the four problems, I was only able to solve one during the competition.  Towards the end I sort of stopped trying, since I knew I wouldn't get the others done in time, so I had a little fun with them and tried different things out.

Out of the eight people who competed, I came in sixth, which was fourth out of the people who are eligible to go on to regional.  Since a team consists of three people, this leaves me as the first alternate, and I've been told by the coach (who is one of my teachers and a friend), James Church, that I'd go with the team to the competition whether or not I competed myself.  I'm all for this.

Considering this was my first time doing a competition, I'm happy with my one correct answer, and with just not coming in last.  One thing I learned during the competition, and while looking at Church's solutions afterwards, is that I need to learn a lot more about the classes Java provides, as there were a few I didn't know about that would have let me solve another problem or two far more easily than the way I was going about things.
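I won't pretend to remember exactly which classes came up, but as an illustration of the kind of thing I mean, java.util.Arrays alone can collapse what I was hand-rolling during the competition into a couple of lines (a made-up example, not one of the actual problems):

import java.util.Arrays;

// Not from the competition -- just an illustration of how much work the
// standard library can save when you know it exists.
public class LibraryDemo {
    public static void main(String[] args) {
        int[] values = { 42, 7, 19, 3, 88, 7, 64 };

        Arrays.sort(values);                          // no need to hand-roll a sort
        System.out.println(Arrays.toString(values));  // [3, 7, 7, 19, 42, 64, 88]

        int index = Arrays.binarySearch(values, 19);  // or a search
        System.out.println("19 found at index " + index);
    }
}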

This competition is another event that has inspired me to learn more about programming and to improve what I know by practicing.  In my free time, I plan on attempting more problem solving exercises like those given in the competition.

Jeff Atwood (@CodingHorror on twitter) retweeted something this morning that was right along these lines.  The post was from @enmerinc and said "How to become a better developer: 1) Go to #StackOverflow 2) Pick a question outside of your comfort zone 3) Open your IDE and solve it"

I really liked that idea, and plan on doing that from here on out.  I don't know that I'll even average one problem a week, but even so, I'll learn something.

Wednesday, September 29, 2010

A Desktop Version of a Classic Supercomputer?!?!?!

Normally I'd relegate posts consisting of ranting about something I found on the internet to my tumblr account, but this was too good to post on a blog that no one reads.

*cough*NotThatThisIsMuchBetter*cough*

Excuse me... Anyways, while reading through posts in Google Reader, I saw what may be my favorite thing featured on Hack A Day to date.  Since this is coming via Hack A Day, many of you probably saw it already, but for those that haven't I present

A TINY CRAY-1:


That's right, Chris Fenton has created what amounts to a desktop-sized version of this classic supercomputing marvel.  It's 1/10 the size of the original, and sports an impressive 33MHz processor (the original only had 80MHz, so no scoffing!).  It's not the most useful of devices, but out of all the things I've ever said "I'd love to have one of those on my desk so people would ask what it is," this is probably the coolest.

It will be a while before I undertake this project, if I ever do, but it would definitely be worth it in my opinion. I mean, who hasn't wanted their own personal Cray? I know when I first heard about Cray supercomputers I immediately went to find out how much they cost so I could plan on buying one when I grew up and became rich (still waiting for this).

You can find more pictures, specs, and code (yes, code!) over at Chris Fenton's site.  Also, you can check out the Hack A Day article for some discussion in the comments.

Thursday, September 23, 2010

Random Stuff

I've been meaning to write a blog post for the past few days to keep people interested, but it's been a busy week.

So, I told myself, "Self, today is the day, you're writing a blog post."

It's been a long day though, and I've got nothing. So, here's my favorite picture of a wombat:


Feel free to share your favorite pictures of wombats in the comments.

P.S. I have a [real] post for tomorrow.

P.P.S. It's Miranda's birthday, so drop her a tweet and check out what she has going on over at Tidbits For Your Wits and Gamespace.

Thursday, September 16, 2010

Password Length and Complexity

I subscribe to quite a few mailing lists, for various topics and reasons.  One in particular that catches my attention more often than others is the Security Basics list.  This is one of several Security Focus lists that I subscribe to.  If you're looking for some lists, this is a good place to start.

One topic in particular that caught my eye today was a question titled simply: "Length vs Complexity."  Being a fan of looking at the complexity of things, I decided to see what this was all about.

Users hear constantly that they should add complexity to their passwords, but from the math of it doesn't length beat complexity (assuming they don't just choose a long word)? This is not to suggest they should not use special characters, but simply that something like Security.Basics.List would provide better security than D*3ft!7z. Is that correct?

Responses ranged from people asserting that increasing the length was better than increasing the keyspace (number of possible characters) to other people saying just the opposite.

The main criticism of longer passwords that are easy to remember (multiple words instead of random characters) was that they are still subject to dictionary attacks.  Granted, it is more difficult to pull off a dictionary attack when the password isn't just one word, as you have to try more possibilities, or have some foreknowledge of the type of password used.  The people saying this approach was not better leaned towards saying that a true brute force attack would most likely not be the first thing tried.

Conversely, people saying that the "D*3ft!7z" example was the weaker one argued that its short length leaves it with fewer total bits than the longer passphrase, even after accounting for its larger keyspace.

I fall in line with the first group.  The problem with using bit strength as your metric for password security is that saying a password has a certain number of bits of entropy assumes every character was generated randomly, independently of the other characters.  Without that assumption, you cannot say that a longer password always has more bits of entropy than a shorter one; randomness is what entropy actually measures.

A few months ago, I wrote a short script in Java to easily do the math on password bit strength.  The aforementioned problem is quickly evident when you run the program:
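The script itself isn't embedded here, but a minimal sketch of the math it does would look something like this (not the original code, and the example keyspace sizes are my own rough choices):

// A minimal sketch of the calculation (not my original script): bits of
// strength = length * log2(keyspace size). Note that the formula only sees
// length and keyspace -- it has no idea whether the characters were actually
// chosen at random, which is exactly the problem discussed here.
public class PasswordBits {

    static double bits(int length, int keyspaceSize) {
        return length * (Math.log(keyspaceSize) / Math.log(2));
    }

    public static void main(String[] args) {
        System.out.printf("8 chars, full 94-character keyspace:      %6.1f bits%n", bits(8, 94));
        System.out.printf("20 chars, letters plus '.' (53 chars):    %6.1f bits%n", bits(20, 53));
        System.out.printf("4 chars, capital letters only (26 chars): %6.1f bits%n", bits(4, 26));
    }
}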


The program could be made more complex.  Tests could be added to check whether all possible types of characters for the given keyspace (numbers and letters; numbers, letters, and special characters; etc.) are actually used.  In most cases, this would give a truer value for the entropy in the end.  Notice I said "in most cases," as this still would not completely solve the problem.

For example, say we have two passwords, "AAAA" and "AFIS" used by two members of Example.com.  Example.com only allows users to use capital letters in their passwords, disallowing numbers and special characters.  Now, a dictionary attack on these passwords may return no results, but when brute forcing the passwords, it is plainly evident that AAAA is going to be found much faster than AFIS.

Both passwords technically have the same number of bits, and both technically use all the character types available, but because AFIS was generated randomly, it has more entropy than AAAA.
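To put a rough number on "much faster" (my own toy calculation, not something from the thread), here is where each password falls in a simple A-to-Z brute force that tries AAAA, AAAB, AAAC, and so on:

// Toy illustration: position of a 4-letter, capitals-only password in a
// lexicographic brute force (AAAA, AAAB, ..., ZZZZ).
public class BruteForceOrder {

    static long position(String password) {
        long pos = 0;
        for (char c : password.toCharArray()) {
            pos = pos * 26 + (c - 'A');   // treat the password as a base-26 number
        }
        return pos + 1;                   // 1-based: AAAA is candidate #1
    }

    public static void main(String[] args) {
        System.out.println("AAAA is candidate #" + position("AAAA")); // candidate #1
        System.out.println("AFIS is candidate #" + position("AFIS")); // candidate #3607
    }
}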

Testing for randomness is quite a bit more difficult, and I'm not entirely sure you can ever completely say something is truly random.  Still, by doing the best we can, we will, without a doubt, end up with much stronger passwords.

Nowadays, with the use of GPUs for password cracking, rainbow tables, massive botnets, and other forms of computation available to those who would seek to crack people's passwords, it is nearly impossible to say a password really is "secure."  All we can hope is that it is "secure enough."  The password system is inherently broken, but it will be a long time, if it ever happens, before we see it completely replaced by a system that does not suffer from shortcomings of its own.

You can find the thread from the mailing list here.

ShoeCon Reminder

This Saturday, the 18th, is the date of ShoeCon 2010 in Atlanta.  I'd love to make it, but I won't be able to, due to having family in town.  If any of you will be in the Atlanta area, you really should try to go.  The event is a conference being held to celebrate the life of Matthew Shoemaker, a friend to many in the InfoSec community.  Any proceeds from the conference go to a fund to help care for Matthew's children.  Please keep his family in your thoughts and prayers.

More on ShoeCon over at ShoeCon.org.

Wednesday, September 15, 2010

Day Off From Social Media

One thing I've heard many times the past couple of years is that people are much more connected than they have ever been before.  We have cell phones, text messaging, e-mail, instant messaging, Twitter, Facebook, etc.  The list of ways we're constantly connected to other people is nearly endless, and new services pop up all the time to make this connectivity even easier.  Having a smart phone means you are rarely without the internet at your fingertips, meaning you have access to all this connectivity more often than just a few years ago, when smartphones were rarities.  With these services, it is amazing how much communication a person can manage in a day!

I can't find a link right now, but I remember seeing a study showing that the number of people we can actively keep in our mental social circle is fairly small, less than 200 if I remember correctly.  The study showed that the use of Facebook and other social sites has actually led to an increase in this number.  This is amazing: with around 600 people connected to me on Facebook, another 200 or so on Twitter, and more elsewhere, it would be nearly impossible to keep track of this many people without these services.  Granted, not all of those people are very active, and I don't keep track of all of them to any great extent, but the fact that I could if I chose to is dumbfounding.

Something else I hear said these days is that people may be suffering from what can be called "information overload" due to all of this connectivity.  I can understand this, as there are plenty of days where I feel I just can't keep up with everything that is going on around me and in my online social circle.  The amount of information is staggering, especially on Twitter, where I attempt to read everything posted by those I'm following.

Lately I've begun playing Empire Avenue, which has been both a blessing and a curse.  It has connected me to so many more people that I otherwise probably would not have found, increasing readership of this personal blog, as well as gaining quite a few new followers for me on Twitter.  I've even signed up for Flickr as a result of joining the site, so now I'm becoming active in a small photography community.  All of that is very good, and I enjoy it.  The problem is that due to all of this social growth, the amount of information and contact I receive every day has gone up dramatically, and those days I feel I can't keep up with everyone have happened more often.

Yesterday, I decided I was going to just take the day off.  I don't have a job, other than being a full time student, so social media feels like a part-time job to me sometimes.  I felt that I needed a day off.  A few people contacted me when I didn't post my usual round of "good mornings" around the web, asking if I had actually gotten out of bed, but for the most part, I didn't have very many people contact me.

The peace and quiet of the day was refreshing.  The last time I took a break from the internet for any length of time was when I spent a week in Costa Rica on a mission trip.  Having been online for a good chunk of my life, leaving the internet is an interesting feeling for me, as the day seems to slow down when I'm disconnected.  I spent a good deal of time yesterday reading and writing, things I haven't been doing as much over the past few months.  Time was spent sitting and thinking, something I don't remember doing in quite some time.

In all, I enjoyed my day off, and plan on having more in the future.  If it's been a while since the last time you "disconnected," then I suggest you give it a try.  Who knows, you might find that you like it.

Sunday, September 5, 2010

Code Complexity Classes II: Revisited

Here's that C code I've been promising:


Compiled in 64-bit Ubuntu 10.04.

When you run the code, just as in Java, you can see the difference in time between the "row by row" and "column by column" implementations:


The row by row implementation ran in 2.64 seconds according to the timer built into the program, with the column by column example running in 3.67 seconds.

This is in a VM, so comparisons to the Java example done before should not be made, as those examples were tested in the host OS.  As always with things like this, the time is subject to your unique system and what is running at the time of execution.

What is important, though, is the difference in time between the two implementations: it is still there, and it is noticeable, even with this relatively small amount of data.

Just for curiosity's sake, I decided to see how turning optimization flags on would affect the execution time:


With first level optimization, our row by row time dropped considerably to 2.32 seconds, and our column by column implementation time went up to 3.88 seconds, further increasing the gap.

With second level optimization, row by row dropped again to 2.27 seconds, and column by column increased further to 3.95.  As you can see, this time the change was negligible.



With third level optimization, the row by row time dropped to 2.16 seconds, an additional improvement about half the size of the one first level optimization gave us.  It also brought the column by column time down to 3.81 seconds, below the first level result but still above the unoptimized time.

I'm not going to go into what exactly these optimization flags do, as that's a topic for another post, and at this point I'm not even sure myself what all they do. I'm extremely new to C, so if you see anything incorrect in this post, feel free to point out my mistakes, and I'll be happy to correct them and give credit where credit is due.

Speaking of credit where credit is due, I must give some to my Assembly Language teacher, Dr. Chen, for providing the basic version of this code (I only added the timer and made some minor tweaks to suit my cosmetic style) and for providing the inspiration for these posts on complexity classes and how they stack up in the real world.  Perhaps in the future I'll revisit this topic in more depth; for now, though, I am done with it.

Friday, August 27, 2010

The Java Heap

Note: Today's post is somewhat of a continuation of yesterday's. Just in case you didn't read yesterday's post, or would like to review, you can find it here.

While writing the code for yesterday's blog post, I ran into a runtime error that I hadn't personally hit before, though it is by no means an uncommon one.  The only reason I hadn't run into it before is that I had never written Java programs with significant memory requirements.

Let's look at a section of code from yesterday:
static int size = 2048;

static int A[][] = new int[size][size];
static int B[][] = new int[size][size];
Find the complete code here

This section declares and initializes the int variable "size", then allocates memory for two two-dimensional arrays that each contain size² elements.  Let's try adding another 1024 to the size variable and see what happens.


Now that's interesting: the program crashes.  The part of the error we're interested in is:
java.lang.OutOfMemoryError: Java heap space at ArrayExample.

Apparently we run into a problem when allocating memory for the second array: there is not enough memory.  At first glance this seems strange, as I have 8 gigabytes of RAM in this machine, and even though Java can't use all of that because I'm running 32-bit Java, I did not think it would be a problem, as the memory required should not come anywhere near that limit.

I suppose the question now is: how much memory did the example attempt to use?  Just looking at the size of the arrays, and not taking into account any overhead, two arrays of 3072 × 3072 ints at 4 bytes apiece come out to roughly 75 million bytes, or about 72 MB of space.  Considering 32-bit allows for ~4 GB of space, it still seems odd that we ran into a problem.  In order to explain this, we have to focus on another word in the error message:

java.lang.OutOfMemoryError: Java heap space at ArrayExample.

What is the "heap space"?

According to JavaWorld:
The JVM's heap stores all objects created by an executing Java program.

What is the size of the heap?

By default, the maximum size of the heap in Java 1.6 is 64 MB.

Now we can see that we were, in fact, trying to use more memory than we had available.  I have a problem with this, though: I want to use more than 64 MB.  Why?  Honestly, because I can.

Is this even possible, or are we stuck with this limit?

Luckily for us, Java lets us raise this limit when we launch the program. By running the java command with the -Xmx argument and a new maximum heap size, we can give the program a larger heap to work with.

Example:

     java -Xmx128m ArrayExample


The number in the example is our desired maximum heap size.  The "m" after the number indicates megabytes; you can also use "g" for gigabytes, and so on.
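One way to sanity-check that the flag actually took effect is to ask the JVM itself.  This is just a quick sketch, not part of the original example program:

// Quick sketch: report how much heap the running JVM is willing to use.
// Run it with and without -Xmx to see the limit change.
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        System.out.println("Max heap:   " + rt.maxMemory()   / mb + " MB");
        System.out.println("Total heap: " + rt.totalMemory() / mb + " MB");
        System.out.println("Free heap:  " + rt.freeMemory()  / mb + " MB");
    }
}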

Let's try it, using the same program as before:


Ah, now it runs correctly, and we can run our test from yesterday with even larger numbers, allowing us to see the difference between the two array copying implementations more clearly.  You can go even larger than I did here.

Just as an example, here's a run with arrays of 10240 × 10240 elements (five times yesterday's size in each dimension):


Now we're getting somewhere.

Another problem you may run into is trying to allocate more memory than you physically have in your machine, or just more than is allowed.  Trying to set the maximum heap to 4g gives an error that it is an invalid maximum heap size.  Yet another error you may run into is the JVM not being able to find enough contiguous memory: the heap has to live in one big block, so just because you have two gigabytes unused on your machine doesn't mean you'll be able to use it (trust me, I tried).

Have some fun with it, see what you can do.  I'd love to hear what the maximum size people can get to work on their machines is.

C code still to come, and I'll try to have some other examples besides just this array copying scenario.

Here are some links to more information on the heap and other options:

Thursday, August 26, 2010

Code Complexity Classes: Math vs. Reality

Had an enjoyable first day of my computer science classes yesterday.  In my Computer Organization and Assembly Language class, the teacher went over some examples of different C code, and then we looked at the assembly for each example to see what was really going on.  One example he showed used large arrays.

In the example, the scenario he gave was that students are given a large array, and they need to write code to copy the array to another array.  In this scenario, two students have very similar implementations, but with one key difference.

Note: I redid the code in Java so I could add a few tweaks, C and other languages to come later.

Code for the first student:
static void copyij() {
    for (int i = 0; i < size; i++) {
        for (int j = 0; j < size; j++) {
            B[i][j] = A[i][j];
        }
    }
}
Code for the second student:
static void copyji() {
    for (int j = 0; j < size; j++) {
        for (int i = 0; i < size; i++) {
            B[i][j] = A[i][j];
        }
    }
}
Just in case you missed it, or didn't feel like looking at the code, the difference is in the order they copied the elements of the array. Both students' code will compile and run without error. The first student copied all elements row by row, as opposed to the second student who went column by column.

Looking just at the complexity class of each piece of code, we see that both have a complexity of O(n²). Since this is often how we measure the efficiency of code, most people wouldn't look into it any further. The real world doesn't always work out so perfectly, though. Running both segments of code and logging the time of each copy shows that the second student's code runs significantly slower than the first.

How much slower? Over 10 runs, the first student's code averaged 16ms, whereas the second student's code averaged 71ms. On my netbook, the difference was even more profound, with the first code being, on average, ~10 times faster than the second.
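If you want to reproduce something like this yourself, a minimal self-contained harness along these lines will do.  This is my own sketch, not the complete code linked at the end of the post; the size and timing method are arbitrary choices:

// Minimal benchmark sketch: time row-by-row vs. column-by-column copying.
public class CopyTiming {
    static int size = 2048;
    static int[][] A = new int[size][size];
    static int[][] B = new int[size][size];

    static void copyij() {                 // row by row
        for (int i = 0; i < size; i++)
            for (int j = 0; j < size; j++)
                B[i][j] = A[i][j];
    }

    static void copyji() {                 // column by column
        for (int j = 0; j < size; j++)
            for (int i = 0; i < size; i++)
                B[i][j] = A[i][j];
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        copyij();
        System.out.println("row by row:       " + (System.currentTimeMillis() - start) + " ms");

        start = System.currentTimeMillis();
        copyji();
        System.out.println("column by column: " + (System.currentTimeMillis() - start) + " ms");
    }
}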


These times are in milliseconds, though, and are small enough to be negligible. Without running benchmarks like this, you wouldn't even notice the difference in your program. What happens when your array has 10 times the amount of data as the one used here? Well, wait until tomorrow, when we cover the same topic using C code, and you can find out.

Edit: forgot to add the complete code so you can run it for yourself.  The complete code can be found here.  Happy coding!

Friday, August 13, 2010

In Memory of Matthew Shoemaker

Matthew Shoemaker
(1973 - 2010)

On Friday, July 30, Matthew Shoemaker passed away.  Matthew was a host and co-founder of one of my favorite podcasts, Infosec Daily.  Even though I never got the opportunity to meet him in person, I feel that through listening to the podcast each night, I did get to know him, and I will miss hearing his insight into the world of information security.

The ISD podcast has a page set up in memory of Matthew, which you can find here.  The page also links to a PayPal account for donations to the Matthew Shoemaker Memorial Fund, which has been set up to help provide for his two sons.  There is also a memorial episode of the podcast (episode 185), in which many friends of Matthew who had spoken on the podcast in the past gathered to reminisce and pay tribute to their friend.

A conference in his memory is also in the works. Called ShoeCon, it will be held September 18th in Atlanta.  Proceeds from the con will go to the Matthew Shoemaker Memorial Fund. More information can be found at the Infosec Daily website.

Please keep Matthew's family in your thoughts and prayers.

Google's Data Liberation Front

With all the news lately about Google's apparent deal with Verizon and what it may mean for net neutrality, it can be hard to remember Google's informal motto of "Don't Be Evil," or hard to believe that it actually means something.  There have been issues in the past with Google not following through on this motto to the satisfaction of many, but that is beside the point.  No company is perfect, and I'm not here to defend Google on other issues or to criticize them; all I want to do is praise a team within Google that I think is doing something good for the internet.


In 2007, an internal engineering team at Google named itself "the Data Liberation Front."  Their goal is to make it easier for users to retain control of their data.  They do this by creating ways for users to easily back up their data from Google services and to remove it from those services.

The argument that users should retain control over their own data has been a rather large issue over the past few years.  Groups advocating privacy and internet user rights have complained about Facebook and other companies for their stances on data ownership, or because users could not satisfactorily remove their data from the offending company's servers.

Even Google has been in the cross-hairs before over the amount of personal data they collect from users, store, and use for advertising purposes. The Data Liberation Front is a refreshing change from all of this, though.  On their homepage, you can find their mission statement easily visible in large red letters:

Users should be able to control the data they store 
in any of Google's products.  Our team's goal is to 
make it easier to move data in and out.

They admit that they haven't perfected this for all of Google's services, but they are working on it.  On the site you can also find a list of products they have already worked on, allowing you to liberate your data.

I'm a big fan of what this team is doing, and though I realize they have been around for a few years and this is in no way new news, I wanted to mention them here in order to spread the word.


Also, everyone likes stickers (at least, I know I do; my laptops are covered in them).  The Data Liberation project currently has a way you can show your support for them: proudly displaying stickers from Data Liberation Farms (which you can get for free!).  I received both of mine in the mail today:


As you can see, it also came with a surprise, a small sticker of the Data Liberation Front logo, which I've already put to work on my netbook:


You can find out more information about the Data Liberation Front on their website at http://www.dataliberation.org/.  Additionally, you can follow along with their mission at the Data Liberation Blog or on Twitter at @dataliberation.


Monday, August 9, 2010

Big Development in Computer Science

Since I claim computer science as one of the topics I discuss on this blog, I feel I'd be remiss not to mention a very recent and interesting development in the computer science world.

There are several problems in computer science that people are trying to solve all the time; many of these problems tie into mathematics or other areas.  Wikipedia has a good list of currently unsolved problems, with links to descriptions and more information for those interested.

The problem of interest to this blog post is the first such problem mentioned on that page: P = NP?

AOL News has an article that I feel explains the problem much better than the Wikipedia page does for people who are not as fluent in their math as I'd like to be (you can find the complete article here).  Quoted from said article:

Roughly, the mathematical problem asks if "questions exist whose answer can be quickly checked, but which require an impossibly long time to solve by any direct procedure," according to the Clay Mathematics Institute.

This may not adequately describe the problem to you, but if you're interested in more information on the problem itself, I direct you again to the article and also to the Wikipedia page on P versus NP.

My reason for not explaining the problem better here is twofold.  First, I do not believe I can explain it better than those sites do. Second, my purpose for writing this is not to explain what the problem is, but to share a bit of news on the problem.

Now that we have that out of the way, on to the news!

On August 6th, Dr. Vinay Deolalikar, a Principal Research Scientist at HP Labs, sent e-mails to several researchers.  In the e-mail, he explained that he had found a solution to this problem, and he shared his findings with them in a paper.

This paper was leaked onto the web in PDF form (it can be found here).  The 66-page document has spread across the internet by now, but that is not the end of the story.  Today, Dr. Deolalikar released a longer, updated version of the paper (102 pages, here).  He has also stated that the final form of the paper is still being worked on and will be released shortly.

This may not sound like such a big deal to some of you, but I assure you, this may be the most important problem in computer science right now, and if this paper answers it, it is an amazing piece of research.  The P vs. NP problem is one of the Millennium Prize Problems, and solving it carries a prize of $1,000,000.  Yes, one million dollars goes to whoever solves this problem, or any of the other Millennium Problems; it is that big of a find.

Currently it can be assumed that hundreds, if not thousands, of mathematicians and computer scientists, professional and armchair alike, are poring over the paper looking to see if there are any issues with it and to determine if it is correct.  I for one find it way over my head, as I had to pull out a dictionary during the first paragraph of the abstract, but I still find it exciting that a solution may have been found.

Links

HP Labs: Vinay Deolalikar
AOL News: P=NP=WTF?: A Short Guide to Understanding Vinay Deolalikar's Mathematical Breakthrough
Greg and Kat's Blog: P ≠ NP
The P Versus NP Page
Wikipedia: P and NP
The Millennium Prize Problems

The Early, Leaked Version of the Paper (66 pages)
Updated Version of Paper (102 pages)

Aquarium Update

For the past month or so I've been having a small problem in my aquarium: all my plants have been dying.  I had about 12 small plants in the aquarium; now I'm down to one healthy plant and one that is on its way out.

I have clay mixed in with the rocks to provide essential minerals to the plants, and I've even bought some liquid plant fertilizer made for aquariums, but nothing seems to be helping.  The only thing I can think of is that the water is colder than the temperature these plants need.

If that is the case, the only way to save the plants would be to add a heater to the tank.  I do have a heater, but with goldfish being cold-water fish, this is not ideal.  Right now I just plan on buying new plants when I have the time and money to do so, as they weren't strictly necessary for my water quality anyway.  My aquarium just looks a little bare without them.


In other aquarium related news, the fishcam has been down for quite a while now.  The reason for this is that I've been using the laptop that normally just hosts the feed for some other work, requiring it to be away from the aquarium.  I should have the camera back up and operational very soon.

Friday, August 6, 2010

Supercomputing and Some Updates

Supercomputing, cluster computing, and grid computing: these are all topics I have been interested in for quite some time, though I've never done any actual work in these areas, just some casual browsing for information and ogling pictures of different setups.

Lately I've had my interest in these topics rekindled, and now I actually have the knowledge and means to play around with all of them.  Granted, I don't have the means ($$$) to play in these areas to the extent I would like to, but I can dabble nonetheless.

Over the next few weeks and months I'll be doing some small scale experiments with different High Performance Computing (HPC) models.  Expect to see a few write-ups and updates here as I do this.

Now, onto some updates.  

I've been taking classes for most of the summer, and when not in class, I've been enjoying my time off.  Due to this, I've been extremely busy, and haven't had time to update this blog as much as I wanted to.  I don't want to be one of those bloggers who apologizes constantly for not updating, so I'm not going to apologize.  If you want me to update more often, bug me about it, otherwise, deal with it.

Another area I've been spending some time playing around in is different programming languages.  Back in the day (I love sounding old), I dabbled in PHP in addition to writing HTML and CSS by hand.  It's been a couple years, and lots of things have changed in all these areas.  These past few weeks I've been doing some basic work in more languages than just Java (what my classes have been on during the past year).  I believe I'm falling in love with programming in Scala, so expect me to post some tidbits (you'll see this word again in a minute!) on these different languages as I work my way through learning them.

In other news, my friend, Miranda, over at Tidbits for Your Wits (She changes this name way too often, so if the link breaks, sorry.  She's promised not to change it again though.) has moved to an updating everyday system of posting.  Not to be left in the dust, I'm going to be moving to a posting at least once a week system, as opposed to my current "post when I feel like it" system of blogging.

I've also added a new section to the sidebar, called "Currently Reading."  This new feature is going to show to everyone (you guessed it) what I'm currently reading.  I'll try to update it as soon as I move on to a new book (or books), but it is bound to fall behind from time to time.  At the end of each book, I'll think about posting a review, if I feel like it.

That's about it.  I look forward to having you all read my stuff in the near future.

Monday, July 19, 2010

Possibly the Most Epic/Crazy/Awesome/Bad Spam Message Ever

Received this e-mail today.  As soon as I read it, I knew it had to be shared with everyone. Enjoy :)

By the way, my favorite part is how "Mr. Page" apparently refers to himself.

If any spam baiters want the e-mail address, @ reply me on twitter.

From: UNITED NATIONS ORGANIZATION
Subject: CONGRATULATION !!! CASH GRANT WINNER.

Monday, July 5, 2010

Fourth of July Fireworks

Spent yesterday on campus hanging out with friends for the Fourth of July.  The night ended (as Independence Day celebrations tend to) with a fireworks show.  I brought my video camera and tripod and caught the whole thing.  Even though I was lazy and kept the settings on auto, I think it turned out fairly well.


Friday, May 21, 2010

Long Facebook "Page" Names: What Are They?

Yesterday I began to notice what appeared to be fan pages with extremely long names that people had "liked" scattered throughout my news feed. Now, I refrained from clicking the "like" button next to them, but many of my friends did not. After the jump is an explanation of what exactly these are, and why I believe they could be a problem.

Sunday, March 7, 2010

Some Random Updates

I don't know if you've looked at my goldfish lately, but they've gotten much bigger in the past few weeks.  They've grown at a rate that surprised me, to tell the truth.  I mean, don't get me wrong, I knew they'd grow, that's what living things tend to do, I just wasn't expecting it to be quite this fast.  Luckily I was planning to move them to a larger tank soon.  "Soon" just happened to be another thing that snuck up on me faster than expected. All is going according to plan though, so by the end of this week they should be in a shiny new home that is much, much bigger than their current one.  This gives me more freedom in caring for them (since water changes won't be required quite as often), and it also allows me the possibility of a couple more fish.  What fish I'm planning on trying to get is a secret, and it is a little ways off before I make the final decision, but I'm excited nonetheless.

Tuesday, February 23, 2010

Admiral Ackbar and Southern Pride

Today we are having a vote at Ole Miss. Our beloved mascot for decades, Colonel Reb, was removed from the field in 2003. This left the Rebels as the only SEC team without an on-field mascot. It is now 2010, seven years later, and we are still without a mascot. The vote today is on this topic, but what it is exactly that we are voting for is puzzling.

Monday, February 15, 2010

New Video!

Here's the newest video I have online. I really like the way this one turned out. It was taken this past December.

I had just gotten the camera, so I was filming pretty much everything that weekend, and Zach and I decided to go to the lake for a guy weekend of video games and playing guitar.  We found the lake as the video shows, completely frozen over, so we had some fun before getting down to the business of Xbox.

You can find Zach at his Youtube profile, be sure to check it out and drop him a comment or two.

Saturday, February 13, 2010

Blackberry

Those who know me know that I'm a complete Blackberry fanboy.  As such, I see it as my duty to read Crackberry daily and listen to their podcast.  Recently they had a post about how Jeopardy got a clue wrong about what the company behind the Blackberry actually is.  Jeopardy said "Blackberry," but we all know it's really "Research In Motion."  They even posted a video clip.



Well, I was watching yesterday's episode today (DVR is an amazing thing), and caught a question where they got it right.  Is someone at Jeopardy reading Crackberry? Here's my video clip.

Prime Numbers

Recently I've become fascinated with prime numbers.  I believe it started when I was reading an article on how researchers recently factored a 768-bit RSA key.  The past few months have seen me becoming more and more interested in different fields of mathematics, and this is just the latest one to have popped up for me.

For people who don't know, a prime number is a whole number greater than one that is divisible only by one and itself; one is not considered a prime number, though it was in the past.  A step further takes you to superprimes, which are prime numbers whose position in the list of primes is itself prime: the 2nd, 3rd, 5th, and 7th primes (3, 5, 11, and 17), for example.

Why is this interesting to me?  Honestly, I'm not really sure, but I've been checking books out at the library on number theory and how prime numbers have been found historically among other similar topics.  I don't have a readily applicable use for this knowledge in my own life, but just for kicks I've been using it to write a program that goes through numbers and determines their primality.

I have a very basic version working currently, and my goal is to optimize it as time goes on and I learn more about optimizations that can be made.  Eventually I plan to have it run on multi-core systems, and also determine if numbers are superprimes.  Code for the basic test program will be added to the "Code Corner" page, and this will also be added to my "Projects" page, since I plan to work on this for a while.
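For anyone curious what the starting point looks like, here is a minimal trial-division sketch of the idea (not the actual version that will go up on the Code Corner page):

// A minimal trial-division primality test (a sketch of the general idea).
public class PrimeCheck {

    static boolean isPrime(long n) {
        if (n < 2) return false;               // 0, 1, and negatives are not prime
        if (n % 2 == 0) return n == 2;         // the only even prime is 2
        for (long d = 3; d * d <= n; d += 2) { // only odd divisors up to sqrt(n)
            if (n % d == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        for (long n = 1; n <= 50; n++) {
            if (isPrime(n)) System.out.print(n + " ");
        }
        System.out.println();
    }
}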

Monday, February 1, 2010

New pages

New pages are being added to the site!

In addition to the page with the embedded Ustream, a new page featuring source code that I will release under a Creative Commons license has been added. A link can be found at the top of the site, titled "Code." There is also a link at the bottom of the site to our current Creative Commons license terms, if anyone would like to see them.

Another new addition to the site is the "Projects" page, where I will detail the projects I'm currently working on and the ones I plan to work on. At the time of this posting, the page isn't linked, due to me not feeling like linking it right now. This will be remedied soon though, and if you really want to see it, it isn't too hard to find.

The addition of these pages should allow me to put much more content on the site in an organized fashion.

Other News:

The sidebar has also received a few new sections. One, called "My Stuff," links you to content I've created around the web. The other, called "Interesting Sites," links you to just that: sites that I find interesting and/or that are run by friends or people I look up to.

Sunday, January 31, 2010

Computer Lab

It's interesting when I walk into a computer lab on campus. I always feel I have two choices when I walk in. Do I optimize my computer decision based on having an easy escape route from the room? Or would it be better to make the decision based on no one being able to see my screen?

Should I compromise a little on both?

If I compromise, how do I know which people to give the lowest threat ratings towards?

It's never as simple as just walking in and taking the closest computer, because you never know just how dangerous the computer lab can be. You never know just what the person next to you might catch on your screen, what they might be able to use as material for blackmail.

Maybe they catch your password and log in to your sites, wreaking havoc on your online identity. Or maybe they find out all about you, tracing all the sites you went to, finding your profiles and building a dossier on you to add to their collection.

Or maybe I'm just paranoid when I use public computers.

Tuesday, January 26, 2010

Goldfish Feeding Video!





A short video of Matcha and Sencha eating. I think they were a little camera shy! Normally they chase the food around the tank a little more aggressively.

New Years Resolutions

Most people set their New Year's resolutions on New Year's, if not before. Another thing most people do is slack off on what they resolved to do by the end of January. Because of this, I've decided that waiting until the end of January to make mine is a much better idea: that way I can't have slacked off by the end of January, thus bypassing the problem entirely.

Monday, January 25, 2010

Goldfish

My brother has an aquarium that he keeps fish and turtles in. While helping him with the aquarium, I found that I really enjoyed learning about the fish and caring for the tank. I had found myself a new hobby.

The last time I had an aquarium was when I was little and we lived in Hawaii. I don't remember much about that, just that we had one and we had some fish. Yesterday I bought two fantail goldfish to start my new aquarium, and I've been watching them swim around happily ever since.

When my fascination with aquariums started a few months ago, I decided to get online and see if I could find any live videos of aquariums. I stumbled upon Jason's Fishcam and knew that I'd want to do something similar with my aquarium, if for no other reason than to be able to enjoy it wherever I happen to be.

If you want to check out my aquarium, you can find it on my Ustream channel, where it will stream pretty much every day.

Hope you enjoy, and check back often as I update about my aquarium and whatever else I feel like talking about!