Friday, December 17, 2010

File Systems 3

Now I realize I should have made my blog posts with the chapter name and a number signifying which one it is. Oh well.

Now on to the cp command. The basic syntax for copying something is cp this_file /over/here. Now the switches:

-f does not ask when overwriting files
-i interactively asks before overwriting
-r recursive
-s creates symlink to the source file
-u only copies when the source is newer or there is no pre-existing target.
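A quick sketch of those switches in action (all paths here are scratch ones made up for the demo):

```shell
# Work in a scratch directory so nothing real gets overwritten
rm -rf /tmp/cp_demo && mkdir -p /tmp/cp_demo && cd /tmp/cp_demo
echo "version 1" > this_file

cp this_file copy_of_file      # plain copy
cp -u this_file copy_of_file   # -u: does nothing here, target isn't older than source
cp -s this_file link_to_file   # -s: makes a symlink instead of a copy
ls -l link_to_file             # shows: link_to_file -> this_file
```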

mv on the other hand moves files. It uses the same syntax and switches as cp (except -r, because moving is already recursive).

Here's something I would normally just use a GUI for, but anyways. The dd command (according to the book) is often used for duplicating things like a CD .img or a partition. An example:

dd if=/dev/fd0 of=/dev/fd1. if is the input file, of is the output file, and this does a 1:1 copy of one floppy disk to another (fd0 and fd1 are floppy drives). The same thing can be used for a partition, just change the devices. If you want to back up the MBR you would do:

dd if=/dev/hda of=/root/MBR.img count=1 bs=512. count sets the number of blocks to read from the input file and bs sets the block size, so count=1 bs=512 copies just the first 512 bytes, which is exactly where the MBR lives.
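Running dd against real disks is destructive, so here's the same count/bs idea sketched against an ordinary file (disk.img just stands in for a device):

```shell
# Fake 2 KB "disk" built from four 512-byte blocks
dd if=/dev/zero of=/tmp/disk.img bs=512 count=4 2>/dev/null
# Grab only the first 512-byte block -- the same shape as the MBR backup
dd if=/tmp/disk.img of=/tmp/first_block.img bs=512 count=1 2>/dev/null
wc -c /tmp/first_block.img
```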

Lastly mkdir/rmdir and rm. mkdir makes directories. To make a directory you would do mkdir directory or mkdir directory/directory2 to make a subdirectory. rmdir works the same way but the directory must be empty.

rm removes files AND can remove directories full of files. You don't need me to tell you the syntax so here are the switches.

-f removes a file without a prompt
-rf removes a directory and all its contents without a prompt.
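A tiny demo of the rmdir/rm difference on a throwaway tree:

```shell
mkdir -p /tmp/dir_demo/sub             # -p creates parent directories as needed
touch /tmp/dir_demo/sub/file.txt
rmdir /tmp/dir_demo/sub 2>/dev/null || echo "rmdir refuses: directory not empty"
rm -rf /tmp/dir_demo                   # rm -rf removes the whole tree, no prompt
```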

Enough for now...

Wednesday, December 15, 2010

File Systems 2

In the root of the filesystem (/) you have the common directories (15 of them). Here are a few.

etc-Configuration files
dev-Device files
home-Home directories for users
lost+found-Henry explained this one. It's where fsck puts files (or file fragments) it recovers from a corrupted file system but can't reattach.
opt-Third-party applications
root-Root user home directory

Here is the redundant stuff I was talking about earlier.

As you know cd is used to navigate the file system. That's it on cd. ls is more interesting.

ls lists files and directories.
ls -l gives you the permissions, links, date, group, and owner.
ls -a lists all files (meaning it lists hidden files too)
ls -i lists inode info
ls -lh shows "human-readable" output so that means things are in KB, MB etc...
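Here's roughly what that looks like on a scratch directory:

```shell
rm -rf /tmp/ls_demo && mkdir -p /tmp/ls_demo && cd /tmp/ls_demo
touch visible .hidden
ls        # only shows: visible
ls -a     # adds . .. and .hidden
ls -lh    # long listing with human-readable sizes
```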

Command here that I didn't know about is file.
file basically tells you what kind of file the file you're looking at is. So...
A text file might read out as text_file: ASCII text or if it's a binary it will tell you what kind of architecture it is.
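For example (assuming the file command is installed, which it usually is):

```shell
echo "hello" > /tmp/text_file
file /tmp/text_file        # something like: /tmp/text_file: ASCII text
file /bin/ls               # a binary: reports ELF plus the architecture
```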

Finally there's touch. touch can be used for different purposes, the most basic being creating a blank file, like touch blankfile. touch is also used to update a file's timestamps. So if you wanted to change the file date of blankfile you would do touch -t yyyymmddhhmm blankfile, which I believe is self-explanatory. One cool thing you can do is touch -r blankfile blankfile2, where blankfile's times are copied to blankfile2. Yup.
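The touch variations above, using throwaway files:

```shell
touch /tmp/blankfile                      # create an empty file
touch -t 201012170900 /tmp/blankfile      # set its time to 2010-12-17 09:00
touch -r /tmp/blankfile /tmp/blankfile2   # copy blankfile's times to blankfile2
ls -l /tmp/blankfile /tmp/blankfile2      # both show the same date
```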

Monday, December 13, 2010

This Week - Dec 13

I will continue reading into the File Systems chapter most likely finishing this week. After that there are 3 more chapters so I should be able to get things done with time to spare.

Referring back to Mr. Elkner's previous question about how useful the LPIC is: I haven't finished it yet so I can't really comment, but I would say it would be best to have students who are interested in being a sysadmin work the first year doing sysadmin things... but also make sure they get to try the basic stuff that is in the book, so when second year comes around they won't have to use as much time reading some chapter they already know. Of course they're going to have to learn about the old crap too but that can be reserved for second year also.

Friday, December 10, 2010

File Systems

First I'd like to thank Henry for sending me the ebook for I could not have finished this blag post without it and that's it.

Partitioning and file systems is the current chapter. The beginning talks about basics such as logical partitions and swap. According to the book fdisk is the most popular partitioning tool and I can't disagree.

To edit the device that you want you would use fdisk /dev/device_here. It shows how to partition a machine and it's funny how at the end of that it says "This works only on a machine that you are destructively partitioning for an installation". Whoops! Anyways...

fdisk -l lists your partition tables. Here's part of the output from a computer of mine.

Device     Boot  Start   End     Blocks       Id  System
/dev/sda1  *     1       8486    68160928+    7   HPFS/NTFS
Partition 1 does not end on cylinder boundary.
/dev/sda2        8486    60801   420223041+   f   W95 Ext'd (LBA)
/dev/sda5        8486    34692   210500608+   7   HPFS/NTFS
/dev/sda6        34692   34756   512000       83  Linux
/dev/sda7        34756   60801   209207296    8e  Linux LVM

It's actually very screwy but as you can tell sda1 is the boot partition and it's Windows, sda2 is the extended partition that contains the logical ones, sda5 is a logical partition (they always start numbering at 5) I thought I deleted long ago... and the last two are the Fedora install: a small Linux partition and the LVM partition.

Last part I got to was superblocks and inodes. A superblock is basically a block of metadata describing the file system: its size, the number of inodes, disk usage, and so on. There are backup copies of it throughout the device, occurring every 8192 blocks. They are important because if part of the disk is corrupted and the primary superblock is in it, you still have the others to fall back on.

Inodes are assigned to a file when it's created. They are pointers to the disk blocks that hold the file's data, and each inode lives in a block group alongside the blocks it points to. There is only a limited number of inodes, with only three density presets available when you create a file system.


A smaller ratio means more inodes. You could run out of inodes and still have hard drive space left, but you wouldn't be able to use it. So when you are creating the file system you can use mkfs with one of three type presets:

news-one inode per 4 KB of space
largefile-one inode per 1 MB
largefile4-one inode per 4 MB

largefile4 should only be used for databases.
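On the ext family these presets are selected with mke2fs's -T option. A sketch against a scratch image file instead of a real partition (path and size are my own, and the mke2fs call is guarded since it may not be installed everywhere):

```shell
# 8 MB scratch image standing in for a partition
dd if=/dev/zero of=/tmp/fs.img bs=1024 count=8192 2>/dev/null
if command -v mke2fs >/dev/null; then
  # -T news = densest preset (one inode per 4 KB);
  # -F is needed because the target is a plain file, not a block device
  mke2fs -F -q -T news /tmp/fs.img
fi
```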

My journey ends around the mkfs command which is what you use to make the file system and there are many file systems to choose from. That's all for now.

Wednesday, December 8, 2010

End of Hardware

In the chapter it talks quite a bit about the Point to Point Protocol. Using its examples, if you want to make a PPP link you would use a script that looks like this:

chat "" ATZ OK ATDT5558080 CONNECT "" login: username word: password

Mind you this is all about modems, which is pretty much irrelevant these days, but I'm assuming they want us to know this...

chat wakes up the modem: ATZ resets it, ATDT5558080 dials the number, CONNECT is the reply we expect back, and then the login: and word: prompts are answered with the username and password.

Still not done. After that you must establish networking (chat just opens up the connection) by using pppd (the point-to-point daemon). Another example:

pppd /dev/cua1 57600 crtscts defaultroute

pppd /dev/cua1 changes the interface over to a PPP connection and sets up networking. 57600 specifies the estimated speed, and no, that is not kilobytes, it's baud. crtscts enables hardware flow control (the "handshake") and defaultroute sets the default gateway to the remote machine's IP. I obviously couldn't test any of this like the previous commands so I'm just going to have to go with their word.

Apparently the LPIC assumes you have servers with SCSI so...

The SCSI bus width determines the number of devices, so 8-bit SCSI supports eight IDs and 16-bit supports 16. The ID number you assign (0-7 or 0-15) determines the priority the device gets when accessing the SCSI bus. Usually the highest number is the highest priority.

Any info on SCSI devices on your machine can be found in /proc/scsi/scsi and then the chapter abruptly ends. I took the exam after that and got an 8/10 which I think is pretty good.

Monday, December 6, 2010


I'm going to read more of LPIC as usual which this week involves finishing up the hardware chapter. We're going to get this done by January since we all decided to get the exam finished to "prove" that we are actually working and so that we have time to focus on getting the lab fully autonomous if you will. Hopefully after that it will require no intervention at all until the time comes to upgrade... But we'll be long gone by then :D Just kidding. I think Devin and Jason will have enough of a handle on things to take care of the lab.

Friday, December 3, 2010

Snippets of LPIC

The hardware chapter of the IBM docs focuses a lot on legacy hardware so I'm going to tell you about the most interesting stuff I read. A lot of this still assumes you have to manually configure IRQs and it seems to think that the latest version of Fedora is Fedora Core 5.

Modems (modulator/demodulator) translate digital data from the computer into an analog stream that can be sent over a phone line. They used to be external, then built onto cards, and then, to save even more, manufacturers made "winmodems" where some functions of the modem were offloaded to the PC and the drivers for them were written for Windows.

IRQs. Back in the day every device would have its own IRQ. Now devices share IRQs, and when the CPU is interrupted it performs an interrupt check to see which device interrupted it. dmesg can show you the IRQ information, and you can use it in conjunction with grep so you don't have to sift through the whole output. Henry also showed me how to find a device's name using dmesg | grep expression_here after I heard him talking to Devin about mounting a drive. I always had to use some convoluted method I found on Google involving fdisk whenever I wanted to mount something. Anyways, thanks to PnP (Plug and Play) you don't have to worry about IRQs and COM ports these days.
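The grep trick looks like this; I can't paste live kernel output here, so a canned line stands in for dmesg (on a real box you'd pipe dmesg itself):

```shell
# Stand-in for kernel log lines -- on a real machine: dmesg | grep sdb
fake_dmesg() {
  printf 'sd 2:0:0:0: [sdb] Attached SCSI removable disk\n'
  printf 'usb 1-1: new high-speed USB device number 2\n'
}
fake_dmesg | grep sdb    # only the line naming the device comes through
```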

I fell ill Thursday but I'll tell you that story about the game server. It's not interesting and doesn't involve any epiphanies. Basically I run a game server where there are no rules per se, but there are people breaking the boundaries of the game by exploiting glitches. Unfortunately I have no evidence (logs aren't any help) but I have a fairly good idea who it is. I could ban them but I prefer to be able to lay out the evidence. The problem is this affects the other players, so basically: do I ban someone without any evidence and hope that rids the problem, or wait to get some at the expense of the other players? (I'm not asking you to answer that, that's just the question swirling around in my head.) Told you it was uninteresting.

Saturday, November 20, 2010

Rsync Done, LPIC and Minecraft

Friday I began looking at the IBM docs that Matt was talking about. I couldn't get to read them earlier since I was working on that backup system. I didn't get to read much but it is really short and if Matt could pass with it then I think I could...

So at the moment I'm reading those docs and I'm waiting on Devin to get the server ready so I can deploy that rsync script. I talked to him on Friday and he said he would give me an e-mail when it's ready.

I also played around with Fedora a bit... I have a computer at home that I use as a server that is running Fedora for the LPIC. I had to "make" a program and I had some missing dependencies (How am I supposed to know what you want?). I installed 2 different things (using yum!) and one after another I found another dependency so I just did a "groupinstall" in yum and I hope I finally nailed it. In a real environment I would do that but I was a bit "flustered" with Fedora (WHAT'S THAT? YOU WANNA INSTALL SOMETHUN? GOTTA USE ROOT! NO SUDO FOR YOU!) so I didn't want to deal with it. I also learned some things about being an admin on a game server (witch hunts) but that's something else...

Wednesday, November 17, 2010

Script Ready (Essentially)

Monday and Tuesday I worked on getting key-based authentication for ssh. I had to generate a private and public key on the source machine using ssh-keygen. You then copy the pubkey from the source to the target and enable key-based authentication in the sshd config (in Ubuntu it seems to be on by default). Once that is done you can ssh from the account that created the key into the account that holds the pubkey.
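The key generation step looks like this, sketched with throwaway paths (the real keys live in ~/.ssh, and the target host is hypothetical):

```shell
# Passwordless RSA keypair in a scratch location
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -q -t rsa -N "" -f /tmp/demo_key
ls /tmp/demo_key /tmp/demo_key.pub
# The pubkey then gets appended to ~/.ssh/authorized_keys on the target, e.g.:
#   ssh-copy-id -i /tmp/demo_key.pub sysadmin@target
```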

So Wednesday Matt helped me with rsync. The two echo commands are from another script I found on the internets. As you can see the script runs the rsync command and makes a log of what happened, with the date appended. I included comments with what the parameters do since there are so many.


LOGFILE=rsync.log

echo $'\n\n' >> "$LOGFILE"
rsync -avzogtph /home/sysadmin/stuff/ sysadmin@ >> "$LOGFILE"
echo "Completed at: `/bin/date`" >> "$LOGFILE"

#-a archive mode
#-v verbose
#-z compress
#-o preserve owner (super-user only)
#-g preserve group
#-t modification times
#-p preserve permissions
#-h human readable

I will talk to Devin about getting it set up.

Friday, November 12, 2010

Rsync and vim

Monday and Tuesday was vim. Apparently you can run a command inside vim by adding ! before it. Seems very nice for when you're physically working at a server and don't want to close out of vim. Another very awesome feature is that you can split your window vertically or horizontally in vim to display two files, or the same one twice. You run :split or :vsplit, with a filename after it if you want the new window to hold another file. Most of the time I don't need this since I can just ssh in multiple times, but as before this could be a big help when interacting directly with a server. Onto rsync.

Jeff wants a backup in place for our LTSP and he pointed me towards rsync. rsync copies files, but as implied it syncs them, so it only overwrites a file if it has changed and less bandwidth and time are wasted. It can also retain file permissions, which is cool beans. It will also be run as a cron job so it occurs every night. I've researched it and this seems to be the most comprehensive article on rsync and cron.

What will happen is I will generate ssh keys for our servers because without them it isn't possible for the LTSP server to run the script without entering a password. Once the keys are in place and the ssh config is modified I will create a script along the lines of this.



echo $'\n\n' >> $LOGFILE
echo "Completed at: `/bin/date`" >> $LOGFILE

I'm debating whether to just make the script one line or use this one where you modify the variables. After that I will use crontab to have the script run every night and back up the home directory, and everything will be hunky dory. Or so I hope.
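For reference, the nightly cron entry would be a single line like this (time and script path are my own placeholders):

```
# m h dom mon dow command -- i.e. run at 1:30 am every night
30 1 * * * /home/sysadmin/rsync_backup.sh
```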

Thursday, November 4, 2010

Vi and the Vim

I taught Henry how to edit previous blog posts. The end.

So after that lesson I read more from the book. As you may already know (YOU should know :P) to edit or create a new file you would run vi /your/directory/and/file/here. Once inside vi you use lowercase i to enter insert mode, where you enter text. Apparently I does the same thing but takes you to the beginning of the line. There are also a few other insert methods. Another cool feature when opening a file: adding +(your number here) will jump to said line in the file. To navigate when not in insert mode you can either use the arrow keys or h, j, k, l: h being left, j down, k up, and l right. Personally I use the arrow keys but nethack may force me to learn hjkl.

Paragraph break for the eyes. So, more editing commands. To undo you just simply press u, but sometimes I use :q! when I accidentally edit something I probably shouldn't have. I know I can use u but... To save and exit a file, :wq writes and quits. Simple. There is also :x which also saves and quits, and apparently you have to know both for the exam. More stuff. dd deletes an entire line and yy copies (yanks) an entire line. When working with the hostname scheme for our lab yy was useful so I wouldn't have to exactly replicate a four-line block of text with proper syntax. p pastes whatever is in the unnamed buffer, as the book puts it (clipboard sounds cooler). Some more advanced copy-paste that I am unlikely to ever use is the fact that vi has 27 buffers (one is unnamed) and you can copy things to any of them if you specify it. If not it just overwrites the unnamed buffer. For example, "ayy would yank the line to buffer a and "ap pastes it.

Lots of stuff to cover in vi unfortunately but I'm almost through with that chapter.

Monday, November 1, 2010

Quarter In Review

So totally not copying Henry's layout of the review which I must say is quite pitiful here is my superior post. Any similarities observed between the two posts are in no way related... except for the fact we worked together most of the time.


What did I accomplish during 1st quarter?
First quarter Henry and I (and with the help of Matt, wouldn't have figured out bullcrap config without him) set up the LTSP server. That's the major accomplishment. After that we both set up static leases, ingenious hostnames and are now studying for the LPIC.

Did my accomplishments meet my expectations? Why or why not?
Yes/no. When I walked into the lab day one my first thought was to get that LTSP server working completely and hadn't really thought of anything else. I do have the LPIC which is ongoing so I can't say I have accomplished that. So yes I did meet my expectations but that was my only expectation.

What did I learn during 1st quarter?
I learned the linux from the LPIC somewhat. Still ongoing. I could probably set up another LTSP server no problems next time. I can't say I learned one exact topic like I learned networking (which I did last year) but I have learned a bunch of different linux things. I also did some slight documentation. Yay google docs.

In what ways will this knowledge be useful to me in the future?
Pretty much everything I have been doing can and probably will be useful to a sysadmin. Even if I don't become a sysadmin everything is still relevant to someone who likes linux which I don't plan on abandoning. Hopefully they'll get some vidya games.

What new skills did I acquire? What can I do now that I couldn't do when the 1st quarter began?
As I said I learned some linux stuff like tcpdump and other commands. I have a better grasp on commands I already knew. Like kill for example.

What will be the focus of my learning during 2nd quarter?
My focus of learning for second quarter is studying for the LPIC. Maybe the schooltool server depending on what is going on with that.

What new skills do I plan to acquire during 2nd quarter?
More linux. As in skills to more efficiently run a lab/network.

What do I plan to accomplish during 2nd quarter?
Finish with the LPIC? LPIC LPIC LPIC LPIC. That's my main goal. Finish studying for it and get it over with.

Friday, October 29, 2010


The entire week I spent reading about the LPIC. At first I read about the shell. A user's shell is set in /etc/passwd and defaults to bash if it isn't set. When a user first logs in, /etc/profile is "sourced" as the book puts it. Can I say called? Called sounds better. We're going with called. Correct me if I'm wrong. So once that is done ~/.bash_profile is called and finally ~/.bashrc, and if for some reason one of these scripts does not exist it moves on to the next.

I flipped forward a bit and read about special characters, many of which I know of. Examples are && which performs the second command only if the first succeeded, | (pipe) which pipes the output of one program into another, || which is like && except it performs the second command only if the first fails, ; which just executes the commands one after another, and finally > and < which redirect output to and input from a file, respectively.
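All of those in one runnable snippet:

```shell
true && echo "runs because the first command succeeded"
false || echo "runs because the first command failed"
echo one; echo two                  # ; just runs them in sequence
printf 'b\na\n' | sort              # | feeds one program's output to the next
echo "saved" > /tmp/redir_demo      # > sends stdout to a file
sort < /tmp/redir_demo              # < feeds a file to stdin
```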

I also read about the ps command, which like before I have dabbled with, but the book has you learn all its switches. ps basically just gives you a list of processes for the current user. ps -e shows ALL processes (ps -a only shows processes attached to a terminal). A really neat one is pstree, which gives you a tree of processes so you can see what process started what, all beginning at init.
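For example (pstree is guarded since it isn't installed everywhere):

```shell
ps | head -n 3             # processes for the current user
ps -e | head -n 3          # header plus the first of ALL processes
if command -v pstree >/dev/null; then
  pstree | head -n 1       # top of the tree: init (or systemd these days)
fi
```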

Henry also showed me a bit about kill. As you can guess it kills a process, but there are around 60 different signals it can send. The default kill sends SIGTERM, which gives the process a chance to clean up; kill -9 (SIGKILL) kills the program outright; kill -1 (SIGHUP) is typically used to make a process restart or reload its config.
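A safe way to play with this is a throwaway background process:

```shell
sleep 60 &                  # something disposable to practice on
pid=$!
kill "$pid"                 # default SIGTERM: lets the process clean up
wait "$pid" 2>/dev/null || true
kill -0 "$pid" 2>/dev/null || echo "process is gone"
```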

Next part of the chapter is "Managing Process Priorities" and then next chapter is vi. A whole chapter dedicated to vi. I also took one of the books home but I'll have it back tomorrow.

Saturday, October 23, 2010

Friday (Saturday) Post

What I did this week. This week Henry and I played around with tcpdump. While we didn't know what each packet meant, being able to see what kind of communication is happening and when is pretty useful. We noticed that ARP requests were being sent out, and that the machines running Sugar kept contacting other machines, which I assume is part of the LAN discovery.

I also created a google doc that has a list of all the phobias we used for our hostnames. At the moment I don't see the doc being very useful since if you need to add or change a phobia you have to jump into the dhcpd.conf anyways so everyone will see your changes. Of course documentation outside of the config is nice though. I'm hoping to add to it (maybe MAC?) so that when people need something one person will be like "Hey guys let's just consult Steve's Google Doc" and then the other guy will be like "Yeah!" and then high fives are issued.

Lastly started reading on fdisk in the LPIC. One thing I take issue with this whole certification is you're left without docs. Now I realize that this occurs with every test but it's still something that annoys me. When I first used fdisk I never knew how to use it but the man pages helped me out. With LPIC you're not just expected to know what the command does but you have to know what a certain parameter does or what you need. I can understand if they ask you how to do something common like create a tar.gz which you should know by heart now but Henry and I were looking at the man pages for a command (can't remember which one) and whoever made it must have had an evil sense of humor. There were six parameters that triggered different results BUT the way they were issued was the next parameter had another t. For example, sudo makemasandwich -t would tell you the time when making the sandwich, -tt would add some mayonnaise, -ttt would get the newspaper for you and compliment you on your hair and -tttt would launch Skynet. My long complaint is it's unrealistic to expect someone to remember this. People remember what they use often and that's it. So while I may learn this stuff now if I don't use it often it's a waste of time. I know that this won't get me out of doing the work but it's nice to rant. Whether the LPIC forces me to learn this kind of stuff will be seen.

Monday, October 18, 2010

Monday Fluff

Today me and Henry talked to Jeff about Wireshark. The problem at the moment is you can't capture packets that aren't traveling to your computer. If we had a hub it would work since the hub broadcasts everything on every port (yay something from Net+). What we were thinking is using wireshark on the LTSP server but rather we're going to try using tcpdump.

So for this week we're going to work on tcpdump and the rest is to be determined. We have to wait on the Jabber server patches according to Jeff.

About LPIC, I still have plenty of studying to do with not so much time.

Thursday, October 14, 2010


As you may know we were working on using the XS school server, which runs ejabberd, so we would have our own little community. That didn't work out, so now we have a regular Ubuntu machine running just ejabberd. Unfortunately it doesn't work out of the box. I found this which states that you have to apply some patches to ejabberd to get it working with the XOs. Henry installed ejabberd on his machine so we can play around with it. One problem I just realized: aren't there firewalls between us and the rest of the internets on the network we're on? We need to be able to test this with someone from outside the network. Have to figure that out Monday...

Monday, October 11, 2010

Yay NY

So to wrap up last week. (Shoulda posted this Friday). The hostname changes are done. And the server is kinda done. The end.

Ok more. The server is running but the docs for XS aren't good at all so we didn't really know what to do trying to set it up. According to Henry (he did this) the server gets connections from the computers but nothing happens afterward. By tomorrow I hope we can get this done.

Monday, October 4, 2010

This Week

My goal for this week is to finish setting up hostnames on every computer and possibly work on the XS server. Hostnames will be easy and the XS server so far looks easy enough to set up too. The way it works is OLPC USED to have packages for different distributions that contained the XS server software, but they're outdated and meant to run on Hardy. Their latest software is one of those custom distro images, based on Fedora. Devin has given us one of his servers which should be sufficient for the job. The basic specs recommended are a 1 GHz processor and 1 GB of RAM, and the server he gave us is something like a 2.4 GHz dual core with 3 GB of RAM. If all goes well Henry will work on the server while I finish hostnames.

Friday, October 1, 2010


So me and Henry have been creating static leases for each machine on the network and giving them new hostnames. The layout will be like last year just different hostnames. Should be done by Monday and then working on Tuesday. (Need to reboot the LTSP server or restart dhcp for it to take effect).

Here's a more verbose version.

Today, Henry and I looked over a list of phobias and decided on some to be the hostnames for the lab, before going and editing the hostname files in each computer we had time for. While Henry edited the hostnames he also called out the MAC addresses to me, and I entered them into dhcpd.conf on the LTSP server, setting up static IPs for each of them in the same scheme as last year. As per Jeff's request, getting the lab ready for the October 16th meeting will be our top priority, and we will continue this on Monday. About half the lab should have static IPs and updated hostnames now, and we should be able to finish the other half on Monday.


Thursday, September 30, 2010

Phobias and Static Leases

So me and Henry are both working on assigning static leases to every machine on the network. We're going to map out the IP addresses like last year where the machine IP address is based on the order of the machines in the lab. Here's the forum post that we're using to assign static IPs since Ubuntu Server Guide wasn't any help. (Neither was Google, apparently there are many ways to say static lease!)
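The leases we're adding end up as host blocks in dhcpd.conf, something like this (hostname, MAC and IP here are all made up):

```
host arachnophobia {
  hardware ethernet 00:11:22:33:44:55;
  fixed-address 192.168.0.21;
}
```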

Now what is this of Phobias you ask? Well we decided to make every computer's hostname a phobia. You can thank The Phobia List for the phobias.

Apparently Firefox thinks my entire blog post is spelled incorrectly. Now I can't tell if something is incorrect D:

Friday, September 24, 2010


So LPIC. Linux Professional Institute Certification. For the sake of blogginess I will post what I'm doing this year (even though YOU know). I will study for the LPIC 1 certification first semester. It is composed of two exams. As you can hopefully discern from the name of the cert, this is a cert for Linux (Professionals!). The beginning of the book is all about basic computer things. I plan on buying another book (or you could buy me one with that budget of yours :) because this book is slightly outdated. The book assumes there are still separate deb and rpm versions of the exam instead of one.

I will also work on a server but I'm not sure how that will work out at the moment. Yup.

Monday, September 20, 2010

Zentyal Post 2 thing

So as I said I'm trying to set up LTSP/Zentyal in a VM but still keeping things nice. At the moment there isn't official support for LTSP (there will be) and there doesn't seem to be much demand for it from the community. Hopefully it will be released soon but I'll just attempt to get it working as is.

Also LPIC. I guess I'll start reading the book. I haven't actually read it yet but it thankfully comes with a .pdf so I can read it at home. I'll try and find a test online so I can see my current abilities. Hopefully I'll be adequate.

Thursday, September 16, 2010


At the moment I have Zentyal running on a VM in the lab. What I plan on doing is playing with it on that first. If it is possible to get LTSP and Zentyal working together nicely I will install it on one of the terabyte drives. Longer post to come.

Another thing to note. I think we ought to compile a list of packages for the LTSP server and Jeff can install them after school.

Monday, September 13, 2010


For the first quarter I am going to work on getting ebox, I mean ZENTYAL, working in conjunction with our LTSP. I'm also going to dual enroll in a NOVA course which I will find out... That's pretty much the gist of it right now. I'll have actual info on how ZENTYAL and LTSP work together later when I start working on it.

Friday, June 18, 2010

End of year


I know I don't have to do this but the blog needs an end post before it dissolves into the intertubes. So without further ado. Have a good summer!

Insert a picture of your favorite vacation spot here

Thursday, June 10, 2010

Almost done

Second to last week. Woooo... (Note I am actually glad that school is almost over [except for your class of course] just everyone is swamping us with work to get us ready and it's making me tired...). Anyways.. Monday I installed Ubuntu 32-bit Server on the machine we're using. It took just about the entire period. Tuesday me and Henry stayed after school to work on it. It was decided that we're going to use the old packages instead of trying to work with the undocumented new ones. Henry installed Debian over our installation and then I installed Ubuntu again because Debian was being annoying. Tomorrow the real work will begin, again. Also I will be taking the exam during the final week since they only schedule the exams from 9 a.m. to 3 p.m. weekdays. Yup.

Friday, June 4, 2010

Long week...

Not really. See? Irony. Yeah... We only had two periods this week and each was less than 30 minutes so you can imagine a lot got done. There's irony again! Okay I'll stop. So Tuesday I did something. Thursday I spent the period downloading the .iso of Ubuntu Server 32-bit. I got to the formatting screen in the installation when the period was over. So if you count burning a CD, getting halfway through the installation, then quitting, doing something on Tuesday and getting the test code for the exam, I did work. It was back breaking.

Fun fact, there are six instances of irony in this post.
The sentence above is somewhat ironic if you don't count it as irony. But then that would make the sentence true, thus making it not irony.
Ironic isn't it?

Tuesday, May 25, 2010

I was mistaken...

So I read up on Kerberos, LDAP and NFS. Kerberos is an authentication protocol that lets you authenticate users over a network, securely. LDAP is, as Wikipedia puts it, an "application protocol for querying and modifying data using directory services running over TCP/IP", an example being something like a telephone directory. NFS allows a user to access a directory on the network as if it were on their own machine. So assuming everything was simple and went hunky dory, here is how an SSO would be laid out. (Oversimplified version up ahead!)

So there would be two servers on the network (you could run it all on three, or on one if you want things to be slow). One server would have Kerberos and LDAP. Every client would also have the Kerberos client installed. Clients would authenticate against the KL server (see the acronym there?) using Kerberos. Once authenticated, LDAP would point to the NFS server, where the client would then mount the user's home directory, or whatever you want. Here's a nice article (written for Red Hat) that shows how to create a Kerberos, LDAP, NFS system.

Nice article.

One thing is I don't know what kind of budget Senior jelkner (yes, jelkner) has or what kind of hardware would be necessary. I assume the KL server wouldn't need to be a powerful machine, but mounting the home directory of everyone in the class would be taxing on the NFS server and bandwidth-hungry.

Wednesday, May 19, 2010


Not gonna happen? So, still waiting on the exam date. So... yeah. Since I don't have much to do I'll just write about my practice exams. Oh, one thing: the clocks on the computers are always out of sync. IPCop can work as a time server, so I'm going to try to get that working.

And today I decided against it since it seems my computer is the only one out of sync for some reason. Gonna fix that and if the need arises I can use IPCop for synced time.

This week hasn't actually been very productive. I've been just reviewing the book since the SSO isn't moving forward, yet. Waiting on the exam date and that's it basically. More review of the book next week. How's about a network related comic for filler.

You'd better have laughed, otherwise I don't know what you're doing here.

Sunday, May 16, 2010

Wait wait wait

I'm here. Should have posted this Friday but in any case if it still applies...

The last chapter discussed management of network documentation. So let's say you take a new job, and the guy you just took over for managed THE INTERNET. Well, unfortunately for you, he didn't document anything: VLANs, hardware, addresses, any of that stuff. Now you have to go through the network and figure it out for yourself, and the internet is pretty big... So by documenting everything, you do yourself and the guy after you a favor, and nobody is left with a large headache. One thing the book mentions documenting that I never thought of is baseline stats for servers: you measure, say, the load on the CPU, RAM and hard disk, and then for future reference, if the numbers have changed you know that something is wrong.

Optimizing your network. Assuming you run a big network, you'll have a lot of different traffic moving through it. You wouldn't want people playing video games getting priority over people who are making calls over VoIP. Using QoS (Quality of Service), you determine which packets have high priority and which don't. So naturally VoIP would be high on the list. Video would probably be somewhere low, so in high-traffic situations video packets may be dropped. For web hosting you may want to use load balancing, so when millions of people are accessing your website they aren't all punishing one server but are instead spread out over all your web servers. Lastly, you want fault tolerance, so if and when a server or hard drive fails, customers can still get access to their data.
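If you want to see the priority idea in code, here's a toy Python sketch. The traffic classes and priority numbers are completely made up; real QoS uses packet markings like DSCP, not strings:

```python
# Toy sketch of QoS priority queuing -- traffic classes and priority
# numbers here are made up (lower number = sent first).
PRIORITY = {"voip": 0, "web": 1, "video": 2, "game": 3}

def schedule(packets):
    """Return packets in the order a priority scheduler would send them,
    keeping arrival order within the same class (sorted() is stable)."""
    return sorted(packets, key=lambda kind: PRIORITY[kind])

print(schedule(["game", "voip", "video", "web"]))
# -> ['voip', 'web', 'video', 'game']
```

VoIP jumps the queue, and games wait at the back, which is exactly the point.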

Something I noticed while reading other people's blogs (that's what I do at night, sitting in front of the glare of the monitor): I read them in their voices. Voices in my head! Also, free Portal, if I didn't already mention it. Which I did.

Wednesday, May 12, 2010

Blocking computer access

So here's how to block access to the interwebs. No pictures since I don't have time to get them.

Go to the IPCop router homepage and login

On the top go to Services then Advanced Proxy
Scroll down to Network Based Access Control

Under banned I.P. addresses, enter the I.P. you want to restrict access for, then scroll down and press Save and Restart.
Here is the I.P. chart. All the desktop I.P. addresses are up to date.

Note: the three laptop I.P. addresses are (from right to left) 117, 128 and 124.

Friday, April 30, 2010


The End. The rest of the chapter is just a more in-depth explanation of troubleshooting a network. I don't really understand why it went through a quarter of the chapter with an overly simple version and then an in-depth version. Also, these posts have become harder to write because the last chapter is just steps for doing whatever, not actual concepts. But I will prevail!

So right now I'm reading about policies and regulations. Policies are basically what you do under certain circumstances, like when a user is locked out of an account, a hacker breaks into your network, or when a gia... nevermind. Procedures are the steps you follow when a policy comes into effect. Regulations are rules imposed by governments and other organizations that your company must follow if you don't want to get hauled off to jail.

So this post has been sitting in my edit post section for a bit. Oh well here's some more stuff.

I went back to subnetting and I do understand it better than I did before, no thanks to the book BUT thanks to Ralph Becker's IP Address Subnetting Tutorial.


So an I.P. address is composed of two parts: the network address and the node address. On a Class A network the first octet is the network address and the last 3 are the node address. On a Class B network the first two octets are the network address and the last two are the node address. I hope you can guess what a Class C address looks like. When the node octets of the address are all set to 0 you get the network address. When they're all set to 1 you get the broadcast address.

Now here is where I got caught up, which is subnet masks. Mr. Elkner gave me an explanation of subnet masks and I will reiterate it for my own clarification. When you have a network address and you want more subnets rather than more nodes, you can apply a subnet mask that borrows host bits to give yourself more subnets. So let's say you have a class B address. The first two octets are the network address. The last two are the host/node address. By applying a subnet mask you get more subnets. Everyone outside your network sees your network address, but inside the network you have your own little system of subnets.
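Python's standard `ipaddress` module makes all of this concrete. The 172.16.0.0/16 block below is just an example class-B-sized network I picked; borrowing 4 host bits (/16 to /20) gives 2^4 = 16 subnets:

```python
import ipaddress

# An example class-B-sized block (picked for illustration).
net = ipaddress.ip_network("172.16.0.0/16")
print(net.network_address)    # host bits all 0 -> 172.16.0.0
print(net.broadcast_address)  # host bits all 1 -> 172.16.255.255

# Borrow 4 host bits for subnetting -> 16 subnets of /20 each.
subnets = list(net.subnets(new_prefix=20))
print(len(subnets))           # 16
print(subnets[0])             # 172.16.0.0/20
```

Outsiders still just see 172.16.0.0/16; the /20 carve-up only matters inside.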

Wednesday, April 28, 2010

User Error

So, back to last week: I said I had something super sorta cool in store (depending on whether you like video games), and it's somewhat related to programming, which my blog isn't about, BUT ANYWAYS, I didn't post about it since I was trying to find a video of it in action, so when I find one I will. Also, Jelkner (I will hereby refer to you by your username) mentioned acquiring the network hardware I wrote about. Well, unless we already have this stuff, you would have to pay hundreds of dollars for it. Except of course a punch-down tool... Now onto USER ERROR and errors.

When something is wrong it's always the user's fault. At least in my experience. Basically the entire chapter is about diagnosing problems. When a user can't log in, first you start with the simple questions: is the computer on, is Caps Lock on, do you have more than 2 brain cells? If somehow that doesn't work, then you have to haul your arse down to the user and try things out for yourself. If it's hardware, replace it; if it's software, re-install it.

Now if, after doing your magic, the workstation still doesn't work, and trying the user's login from another workstation doesn't work either, you have a problem with your segment. Check the server for user permissions, and if that isn't the problem, have fun. (Just so you know, I read the chapter, write up what I read, and then re-read certain parts to make sure I got the right info down.)

So if it's not the server, then you have to check the cabling for things like crosstalk, where two cables are bleeding onto each other; attenuation, where the signal degrades over distance; collisions, although those shouldn't happen in this day and age; electromagnetic interference; or it could just be a bad cable.

The next part discusses troubleshooting when you have a wireless network and then the next next part of the chapter gives you steps to take in solving network problems. Yay.

Friday, April 23, 2010

Extra long (shut up)

Well first let's start on my progress. I am on the second to last chapter. After I'm through with it I'm going to go back and reread and test the concepts I still need a better grasp of.

So back to what I'm learning. Certifier: checks to make sure your network follows standards; costs a lot of money. Time-Domain REFLECTOMETER! It sends a signal down a copper wire, and if there's any interruption in the signal, some of it will reflect back to the TDR. This lets you check things like the speed of the wire, how much signal is lost, and the cable length. The Optical Time-Domain REFLECTOMETER performs the same function as the TDR except it's meant for fiber cables, so instead of electric pulses it uses light, and instead of measuring a reflected response it measures the amount of scattered light. NEW PARAGRAPH.

You all probably know what a multimeter is. It just measures voltage, current and resistance. A Toner Probe is a somewhat nifty device where you connect one probe to the end of a wire and then using the other probe you can find that same wire in a bundle of wires since it listens for the tone.

That was a bit of a boring post, but don't worry! I have something else in store!

Thursday, April 22, 2010

Network Analyzers

Yesterday I downloaded Wireshark. I've actually used it before, once, in conjunction with another little program... Anyways, the reason I downloaded it is that the book recommends it as a packet sniffer. One thing I forgot is that you have to set your NIC into promiscuous mode, which may require special drivers, and running a packet sniffer on the school network may not be the best thing to do. So, onto hardware testers.

A wire-map tester is one of the most basic cable testers you can buy. It just checks that the wires in a twisted-pair are in their correct places. It can also check for broken or unconnected wires. Surprisingly this basic tester starts at around 100 dollars.

A protocol analyzer analyzes, wait for it, protocols! They come in hardware and software forms. One allows you to troubleshoot problems on a network (the book doesn't really explain how...), gather traffic info, find unused protocols to remove from your network, and generate traffic for penetration testing. And that's that for today.

Thursday, April 15, 2010


nslookup is a command which shows you your DNS server and the domain-to-I.P. translation. Yup.

Hosts is a file in both Windows and Linux operating systems that provides host-name-to-I.P. translation. This is useless for the average user and would be annoying to maintain across multiple computers, but on a single computer you could use it to redirect a hostname to another webpage.
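The format is simple enough to parse in a few lines of Python. The file content below is an invented example, not a real system file (93.184.216.34 is just a stand-in address):

```python
# A made-up hosts-style file: each line is "I.P. name [more names...]".
HOSTS = """\
127.0.0.1   localhost
93.184.216.34   example.com www.example.com
"""

def parse_hosts(text):
    """Turn hosts-file text into a name -> I.P. lookup table."""
    table = {}
    for line in text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and blanks
        if not line:
            continue
        ip, *names = line.split()
        for name in names:                 # one I.P. can have many names
            table[name] = ip
    return table

table = parse_hosts(HOSTS)
print(table["localhost"])  # 127.0.0.1
```

The OS consults this table before asking DNS, which is exactly how the redirect trick works.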

All the other stuff in the chapter like NetBIOS takes place on Windows so I can't try it out here.

Thrilling post wasn't it?

Monday, April 12, 2010

Damn you Windows (not really)

I just started the chapter which is all about networking commands. Problem is the outputs are different than the ones I should be getting in the book. Oh well.

tracert (or traceroute in Linux) traces the route to a remote device. Har har. More than that, it lists every hop's DNS name and I.P. on the way to the receiver, including the time between hops.

ipconfig (ifconfig in Linux) is something I used a lot before. ifconfig lists everything about the current machine's network configuration, like its I.P., DNS, default gateway, MAC addresses and all that good stuff.

ping. You know what that is. Yes you do. Ok, fine, I'll explain it. ping just sends an ICMP packet to a host to make sure it's reachable. You can set different parameters, like how many echo requests to send, force IPv6, yadda yadda.
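One annoyance with these commands is that the flags differ between Windows and Linux. Here's a small sketch that just builds (but doesn't run) a ping command line; I'm only assuming the two count flags I know of, `-n` on Windows and `-c` elsewhere:

```python
import platform

def ping_command(host, count=3):
    """Build a ping command line for the current OS (sketch only;
    the count flag is -n on Windows and -c on Linux/macOS)."""
    flag = "-n" if platform.system() == "Windows" else "-c"
    return ["ping", flag, str(count), host]

print(ping_command("example.com", count=2))
```

You could hand the resulting list to `subprocess.run` to actually send the echo requests.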

More commands soon...

Friday, April 9, 2010

This Week

Monday I made my proposal. No not that kind. Stuff happened on Tuesday and Wednesday... Thursday I read and today I read some more. So here's what I read.

Frame Relay is a type of WAN connection based on packet switching. Packet switching basically means sending packets through different paths. It operates on virtual circuits, where each client has their own allocated bandwidth. When you send data you have the CIR and the Access Rate. The CIR determines how much data can be sent before it might be dropped, and the Access Rate, well... it's the Access Rate, aka bandwidth. Back to virtual circuits: they're just like a physical circuit except virtual, so your data moves across a large infrastructure it never sees, because to you it looks like your own circuit. There are two types of virtual circuits: permanent and switched. The permanent kind is like a leased phone line, always in place, while the switched kind is like making a phone call: the connection is established and then dropped when the call, or connection, is through.

That's just one of the WAN types I read about but I'd rather not regurgitate everything I just read.

Gonna try and fix my dad's grandfather clock this weekend. I MUST HEAR THAT WESTMINSTER CHIME!
You gotta skip to the 28 second mark...

Monday, April 5, 2010

Fourth Quarter Project Proposal

This quarter my overall goal is to take the Network+ exam and pass. I'm confident now, not 100%, but I have only 4 chapters to go. I still have to go back and learn subnetting and the layers of the OSI model, but I have the basics down for both.

Network+ exam... To be honest I could probably have been ready by now, but last quarter I just slowed down and then sped up again. Of course I'm still going to be reading the Net+ book (which I can't link), using Wikipedia for obscure things, and doing the practice exams. We should talk about getting a voucher for the Net+ exam when you come back. So I'll take the exam when I can, which I hope will be soon.

Right after I take the exam, which is TBD, I assume I'll start helping Henry or something of the sort. There won't be enough time to work out a full project, especially if I take the exam late. If I get the date for the exam I could make a timeline, but right now I can't really see what I'd be doing.

If I don't have a lot of time left in the quarter after I take the exam I may try programming again for the summer. I'm still keeping my options open for a career.

Thursday, March 25, 2010


Engaging and interesting opening sentence. A WAN is hard to differentiate from a LAN, since the distinction is based on the size and distance of the network. The book says that a WAN is a network that spans great distances and uses phone providers for communication within the WAN. There are 3 different WAN connection types.

Leased Line is a point-to-point connection. It is a permanent line over a long distance.

Circuit Switching is used for telephones and can be used for data, as in dial-up. You only pay for the time you use, not the data, since you have to open up a connection.

Packet Switching is like a half-duplex LAN connection, where only one side is sending data at once, but on a large scale: data is sent in bursts. It's not that good, since you usually share the infrastructure with other companies, and if you need a continuous connection it won't do you any good at all.

So have a happy spring break blah blah gonna go to sleep.

Wednesday, March 24, 2010


Not the kind for surfing porn (Henry) on the school network, but it's somewhat like that. A proxy server handles all packets moving out of the network. It can dissect a packet, which allows you to filter what moves in and out of the network based on keywords, and it can even scan for viruses, although the more thorough the scanning, the slower the network. Proxies can also hide the I.P.s of computers within the network so hackers can't target a specific machine.

An HTTP proxy is the kind we all know and love. The way it works is that a client is configured to send all HTTP requests to the proxy, so when a client accesses a website, the request goes to the proxy, which then returns the website to the original sender. This can be useful when you need to get around restrictions, whether region- or network-based. An HTTP proxy can also be configured to cache web pages so bandwidth isn't wasted on frequently requested pages. Pretty much everything said here applies to any kind of proxy, like an FTP proxy.
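The caching part is easy to sketch. This toy class isn't a real proxy; `fetch_fn` stands in for the actual HTTP request a real proxy would forward upstream:

```python
# Toy caching proxy: fetch_fn is a stand-in for a real HTTP fetch.
class CachingProxy:
    def __init__(self, fetch_fn):
        self.fetch = fetch_fn
        self.cache = {}   # url -> page
        self.hits = 0     # requests served without touching upstream

    def get(self, url):
        if url in self.cache:   # frequently requested page: serve locally
            self.hits += 1
            return self.cache[url]
        page = self.fetch(url)  # otherwise forward upstream and remember it
        self.cache[url] = page
        return page

proxy = CachingProxy(lambda url: f"<html>{url}</html>")
proxy.get("http://example.com")
proxy.get("http://example.com")
print(proxy.hits)  # 1 -- the second request never left the proxy
```

That second request costing zero upstream bandwidth is the whole point of proxy caching.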

Monday, March 22, 2010

Network Security

Firewalls. A firewall's basic function is to allow or deny packets based on security restrictions. It can be a combination of software and hardware. A router usually has a firewall, but you can also have a machine running exclusively as a firewall, known as a network-based firewall. A host-based firewall runs on each machine, protecting only that machine (although I guess it could also protect against outbound attacks). An ACL is literally what it's called: an Access Control List. It allows you to control what kind of packets move inbound and outbound based on specified conditions, like in programming. And lastly, DMZ. A Demilitarized Zone is a subnet behind a firewall where you keep your public-facing, non-critical servers. An example: you have a website for ducks. You keep your webserver in the DMZ, where people can access your website and all things duck-related, but you keep your corporate servers in another subnet with stricter restrictions.
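An ACL really is just a list of conditions checked in order, first match wins. Here's a minimal sketch; the rule fields, the port numbers and the default-deny at the end are all my own assumptions, not any particular firewall's syntax:

```python
# First-match ACL sketch (made-up rule format; real ACLs also match
# on source/destination address, protocol, direction, etc.).
RULES = [
    {"action": "allow", "port": 80},    # web traffic
    {"action": "allow", "port": 443},   # secure web traffic
    {"action": "deny",  "port": None},  # None = any port (default deny)
]

def check(packet_port):
    """Return the action of the first rule matching the packet's port."""
    for rule in RULES:
        if rule["port"] is None or rule["port"] == packet_port:
            return rule["action"]
    return "deny"  # fail closed if no rule matched at all

print(check(80))  # allow
print(check(23))  # deny -- telnet falls through to the catch-all
```

Putting the catch-all deny last is the common convention: anything not explicitly allowed gets dropped.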

Friday, March 19, 2010


DoS. Denial of Service. A DoS is usually used to stop people from accessing a network by flooding it with all sorts of crap. One way someone might DoS your server is by repeatedly pinging it with large ICMP packets. Another form is apparently called smurfing... It's the same thing as repeated pinging, except you spoof the victim's I.P. and ping the broadcast address, so every machine on the network replies to the victim and it's screwed. The last one is a SYN flood, where a server is sent a barrage of SYN packets requesting to start communication, and it won't respond to any other requests until it can deal with the current ones. There's also DDoS, which is just a DoS attack launched from multiple machines at once. So that's it. Gonna go cut my fingers on some Legos.

Thursday, March 18, 2010

Lego WeDo

I took some time yesterday to play around with it. A problem I had yesterday, and the day I first got the kit, was that I could not start the WeDo software without something in each USB port. Kevin experienced the same problem. He said he couldn't get the software working until the day we met. The strange thing is that just now I tried it and the software worked fine. Usually the software starts, there's a beep, and it goes to the main menu.

My brother has his own XO and will be keeping his own blog. We're both running the same OS, same firmware, both freshly installed, and yet we seem to have different problems. One thing I want to know is whether you want us to make everything private, or whether you don't care about anyone else seeing this stuff.

Wednesday, March 17, 2010

User Accounts and WeDo

User accounts. Don't use password as your password. That is all.

Ok, now for the real stuff. RADIUS. A while back we were considering having every wireless device authenticate itself with a username and password and be presented with a captive portal. That pretty much went kaput since it wasn't really necessary, and WPA2 suits us fine. Anyway, RADIUS is basically a way of authenticating users and giving them restricted access to resources. Kerberos is an authentication protocol that can be used alongside RADIUS. I mention Kerberos since it is used for authentication and everyone has heard of it, even if they don't know what it means. Kerberos is also meant for a regular corporate network, versus RADIUS, which is used by ISPs to let their users authenticate from anywhere.

WeDo post in a bit...

Thursday, March 11, 2010

The Week

I decided to take on the super secret task that I can't speak about because it's so super secret, I think. Anyways, because of this super secret task Mr. Elkner gave me an XO, since it is part of the super secret task. It didn't take me long to figure it out, considering it's meant for kids, although there was some difficulty, like opening the darn thing. I actually used it for reading the book while on the bus (which I'll get to in a bit) and it was very nice. The screen doesn't glare and for the most part it was easy to use, but anyways...

I left off last week with some Wi-Fi related things. The next part of the chapter is mostly hardware. In a purely Wi-Fi network there will be two components: the WAP (Wireless Access Point) and a wireless NIC. You can also have a combined wired and wireless router. A WAP will have either omni or Yagi antennas. An omnidirectional antenna transmits all over the place, while a Yagi antenna transmits in one direction but reaches a greater distance than an omni antenna since it focuses all its power one way.

Onto networks! In a small network, instead of buying a WAP you can have all your NICs operate in ad-hoc mode, where each device communicates directly with the others instead of through a WAP. This isn't very good, though, since a WAP is cheap and this kind of network is hard to organize. The other type involves a WAP and is basically like a wired network, except wireless.

The cool thing about wireless networks is that you can obviously move around in them, since your device is wireless, but you can also blanket areas with WAPs so that you can move around freely, and the network is fault tolerant since the WAPs overlap each other to an extent. Unfortunately, for a couple of pages the book discusses how to set up a wireless connection and a WAP in Windows. Not that I hate Windows (I use it more than Linux...), but the book should be discussing the concepts, and I would hope someone reading the book knows how to connect to a wireless network off the bat. Anyways...

The next part discusses wireless security which I will talk about next week after the super secret thing. Oh yes the super secret thing involves Legos.

Friday, March 5, 2010

The Week

Monday (or Tuesday?) Henry, Louie and I moved the back row of computers to the front and vice versa, since Mr. Elkner wanted to be able to scribble stuff on the smartboard where the people working on the Google App Engine could see it.

Tuesday I stayed after school with Henry to check on the other 2 of the 8 servers we received last week. One of the servers had 2 of its 4 fans borked; the second one was fine. Henry did the server install since he had a copy of the desktop release, and I left before he got to the server one.


Thursday I read some more about wireless standards. The chapter discusses the different 802.11 standards, the main ones being 802.11, 802.11a, 802.11b, 802.11g and 802.11n. Within the U.S. of A., Europe and everywhere else, the 2.4GHz and 5GHz ranges contain channels which are open to the public, so we may enjoy creating our own wireless networks. As you can guess, 802.11 was the first standard ratified. The more important ones are b, g and n, although b isn't as important anymore, which I will explain.

So b and g both operate in the 2.4GHz range. b can give you a maximum of 11Mbps. Part of the reason the bandwidth is so low is that it uses CSMA/CA, the collision-avoidance cousin of Ethernet's collision detection. Basically, each packet sent requires an acknowledgment, which consumes resources.

As stated previously, g also operates in the 2.4GHz range and has a maximum data rate of 54Mbps. It uses some magic called Direct Sequence Spread Spectrum that comes further into the chapter. The cool thing about g is that g devices are backwards compatible with b devices. The bad thing is that b devices only work when the network is in b mode, so even if you have 5 g devices and 1 b device connecting to some b/g AP, ALL the devices will be limited to 11Mbps.

n is a somewhat recently ratified standard that operates in both the 2.4 and 5 GHz ranges. The biggest thing about n is that it implements MIMO (Multiple-Input Multiple-Output), which allows an AP to use multiple antennas at once.

I'll be back with more wireless standard goodness for next week!

Thursday, February 25, 2010

Yesterday Henry and I found out that some of the laptops were using our wireless connection instead of the wired one, which led to some of them having I.P.s not specified by MAC. It became obvious when we tried connecting to some of them and couldn't, since they weren't plugged into the ethernet the I.P.s were assigned to. We decided that I should try assigning the same I.P. to both the wired and wireless NICs and hope it worked. Fortunately it did. When I ran ifconfig it showed the I.P. assigned to both cards. To be sure both worked, and that only one was connected at a time, I tried disconnecting and reconnecting both while on the internet, and the connection worked without a noticeable hitch. As long as it works.

Thursday, February 18, 2010

Woo Vector Graphics and Inkscape

So I finished up the chart which shows the physical layout of the lab and the I.P. of each computer. Elkner actually made the chart; I just changed the text. I'm not gonna embed it since it requires a plugin, and I hate having a nagging missing-plugin pop-up when you don't want to install it.

Also, I lied about finishing the chart, sort of. Apparently there is a problem with either Inkscape or the .svg file format where some text boxes appear as black boxes. The great part about this problem is I have no idea how to delete the boxes, since the normal method of selecting one doesn't work. So, here it is in its black-box-covered glory. Cool beans.

OH WAIT. I forgot to explain what an .svg file is in the first place. Basically it's a vector graphic image, which means it scales to whatever size, so if you wanted to show a smiley face on a giant Super Bowl screen in all its smiley glory it wouldn't blur at all, since it's based on algorithms rather than the position of each individual pixel. I think.
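In fact, an .svg is just text describing shapes, which is why it stays sharp at any size. Here's a tiny Python snippet that writes a smiley by hand (the coordinates are whatever I made up):

```python
# A minimal hand-written SVG smiley. SVG stores shapes (circles, paths)
# as text instead of pixels, so it renders sharply at any size.
svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">'
    '<circle cx="50" cy="50" r="45" fill="yellow" stroke="black"/>'  # face
    '<circle cx="35" cy="40" r="5"/>'                                # left eye
    '<circle cx="65" cy="40" r="5"/>'                                # right eye
    '<path d="M30 65 Q50 80 70 65" stroke="black" fill="none"/>'     # smile
    '</svg>'
)
with open("smiley.svg", "w") as f:  # any browser can open the result
    f.write(svg)
```

Open smiley.svg in a browser and zoom in as far as you like; no blur.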

Thursday, February 4, 2010

Wed nes day

Wed nes day I stayed after school to create the fixed leases for every computer in the lab. Before we started, we tested one computer on an I.P. outside of the range. It worked, and for extra precaution I backed up the IPCop config just in case it didn't work and I needed to restore it in an instant. I went around the lab in a pattern, taking note of each MAC address. I then input each I.P. and MAC using the web interface. It was slow and would have been easier editing the file directly, but it probably would have taken me a while just to access it. Now I can harass Aaron without worrying about messing with someone else's computer. I pledge to use this power only for good!

Friday, January 29, 2010


Tuesday I will stay after school and do that DHCP thing I keep saying I will do. Contrary to my hopes about the chapters staying the same length/difficulty, they've become longer/harder (har har). I found reading from the ebook doesn't cut it anymore, and I have to sit down with the actual monstrosity and read it. Reading from a real book has always been easier for me than a glaring computer screen. That's it. Also, for those who don't read xkcd yet, here's a touching/depressing comic...

Thursday, January 28, 2010


So here's a boring and non-thrilling post. I haven't really posted anything informative, since a lot of it has been really boring lately and you wouldn't want me to bore you, right? Well, now it is unavoidable, so prepare to be bored(ed?) and bombarded with facts. Fun.

A hub and a switch are like the same thing, except they aren't. Both can segment networks, but a switch can basically do it better. There really isn't a reason to use a hub over a switch unless you don't have the monies or you already have a bunch of hubs, but even then one switch would be good. When a hub receives data it bombards every port with the data, and the computers that weren't meant to receive it just throw it away. It's a waste of bandwidth, but there is no processing involved, which isn't really a surprise when you just throw crap at everyone. A switch is basically the opposite. It starts out flooding like a hub, but it learns which MAC address sits on each port (from the frames it sees coming in), so later on it can send data to just that specific port without wasting bandwidth.
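The learning-table idea is small enough to sketch in code. The MAC addresses and port numbers below are made up, and a real switch learns from frame source addresses exactly like this toy does:

```python
# Toy switch: flood frames for unknown destinations, learn sources.
class Switch:
    def __init__(self):
        self.table = {}  # MAC address -> port it was last seen on

    def forward(self, src_mac, dst_mac, in_port, num_ports=4):
        """Return the list of ports this frame goes out on."""
        self.table[src_mac] = in_port        # learn where src lives
        if dst_mac in self.table:
            return [self.table[dst_mac]]     # known: one port, no waste
        # Unknown destination: flood everywhere except where it came from.
        return [p for p in range(num_ports) if p != in_port]

sw = Switch()
print(sw.forward("aa", "bb", in_port=0))  # "bb" unknown -> flood [1, 2, 3]
print(sw.forward("bb", "aa", in_port=2))  # "aa" was learned -> [0]
print(sw.forward("aa", "bb", in_port=0))  # now "bb" is known -> [2]
```

After a frame or two in each direction, the flooding stops, which is exactly why a switch beats a hub.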

I'll make another post tomorrow.

Wednesday, January 20, 2010

Third Quarter

I guess I forgot to hit the post button yesterday... Anyways.

Guess what I'll be doing next quarter? Networking! I am hopeful and completely very mostly somewhat slightly confident that I will be ready for the Net+ exam by the beginning of 4th quarter. I haven't been reading the book in order, but I would say I'm 60 to 70 percent of the way there, as long as the chapters don't get any harder or denser. So, yeah.

Tuesday, January 12, 2010

Quarter 2 Review

During the quarter I learned stuff. And things.

The first thing of the quarter was getting the whitelist ready, which feels like a million years ago. It taught me how painful it can be if you don't read the documentation. I "learned" the OSI model, as in I understand how it works, encapsulation and whatnot; I still need to memorize the layers. I learned about all the fun and exciting cables like Fiber, Cat and Dog. (Didya see the joke in there? Cat and Dog? Geddit? Shut up.) I now know how to convert numbers to binary and to hex. I also learned a couple of pretty minor things, like using SCP, but those aren't really worth listing. So that's it.

As you can guess my next quarter goal will be to continue working with Net+.

Strange how I can write a page on one topic and yet when making a review I can only come up with one paragraph.

Friday, January 8, 2010

Happy 8 Day Old New Year

First post of the New Year.

Over the break I ate a lot, played video games, researched LDAP and had a small epiphany, although that's for another day. I searched the internet trying to find a guide on how to set up an LDAP server. There is a lot of info out there, but it's all gibberish to someone new to it like me. I guess they expect you to have some prior knowledge, which means nobody ever makes good documentation. I did find a decent guide on the Ubuntu website that walks you through making an LDAP server, which is very nice, but even if I did set up the server, the moment Mr. Elkner comes to me and says he wants something not covered by the guide, or something breaks, I'll be screwed. So I would need to learn more about LDAP.

So during the course of the week I learned how to convert numbers to binary, binary to hex, blah blah. It actually isn't hard; the only problem is it's somewhat time-consuming. I had to learn it since hex relates to MAC addresses, and binary since I.P. addresses and subnets are based on binary. Subnetting is probably one of the most important things you need in a large network, in my opinion. I would go into a long-winded explanation of why it's important, but I have yet to fully understand it. Basically, you can't just set up a large network while ignoring subnetting. Once I finish the chapter and know what I'm talking about, I'll make a post on it.

I know I'm forgetting something, but I can't remember it right now, and according to some study (I can't remember where I read it), trying hard to remember something decreases your chances of remembering what you forgot. So don't think too hard about what you forgot or you'll forget it.

Also Chuck season 3 starts this Sunday but you already knew that.