
Sensor networks, circa 1967

What was the first sensor network? Thinking back, I bet most people would guess one of the early demos by Berkeley and UCLA: the mote showcases at the Intel Developer Forum, the Twentynine Palms air-dropped mote demo, or Great Duck Island. These were all around 2002 or so. It turns out this guess is off by about 35 years -- the first bona fide wireless sensor network was actually deployed in Vietnam, along the Ho Chi Minh Trail, in 1967. It was called Igloo White.

I've been doing a lot of reading about Igloo White lately, and while most of the information on the program is still classified, there are a bunch of articles, books, and a few websites that provide some useful details. Igloo White was a system designed to use seismic and acoustic sensors to detect PAVN movements along the Ho Chi Minh Trail, which ran from North Vietnam, through Laos, and into South Vietnam (thereby skirting the DMZ). It consisted of wireless sensors, typically dropped from helicopters and aircraft. In all, nearly 30,000 sensors were deployed during the war. (The image above shows an airman about to drop an Igloo White sensor from a helicopter.)

You can read much more about Igloo White elsewhere, but here are some juicy tidbits that caught my attention. The sensors themselves were shaped like large artillery shells and buried themselves in the ground, with a radio antenna designed to look like the surrounding jungle foliage. They used lithium batteries with an expected lifetime of 30 days. Each sensor performed simple local thresholding and triggered a movement alarm when a certain level of activity was detected. (All of this was done with analog circuitry: at the time, a digital computer was the kind of thing that filled a good portion of a room.)
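
The thresholding behavior described above can be sketched in modern, digital terms. To be clear, this is a toy version of what the sensors did in analog hardware; the window size and threshold value are illustrative assumptions, not documented Igloo White parameters:

```python
def movement_alarm(samples, threshold=0.5, window=8):
    """Fire an alarm when the mean absolute amplitude over a sliding
    window exceeds a fixed threshold -- a rough digital analogue of
    the sensors' analog thresholding circuit. The window size and
    threshold here are illustrative, not historical values."""
    for i in range(len(samples) - window + 1):
        energy = sum(abs(s) for s in samples[i:i + window]) / window
        if energy > threshold:
            return i  # index where activity first crossed the threshold
    return None  # quiet: no alarm

# Background noise stays below threshold: no alarm.
print(movement_alarm([0.05, -0.1, 0.08, 0.02, -0.04, 0.06, 0.01, -0.07, 0.03, 0.05]))
# Sustained seismic energy from passing traffic trips the alarm immediately.
print(movement_alarm([0.05, 0.1, 0.9, 1.1, 0.8, 1.0, 0.95, 1.2, 0.7, 0.9]))
```

Averaging over a window rather than reacting to single samples is what keeps a one-off spike (a falling branch, say) from triggering the alarm.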

The sensors would transmit a 2W signal on the UHF band, which was picked up by orbiting EC-121R aircraft that flew 24/7 on rotating 18-hour missions over various parts of the sensor field. The personnel on board would listen to the transmitted acoustic signals and attempt to classify the targets. They could even identify specific trucks based on the acoustic signature of the engine. Detections were relayed to a top-secret SIGINT center at Nakhon Phanom, Thailand, where the data was stored on IBM 360 computers and processed largely by human analysts. The analysts would then call in air strikes against the targets. Note that in many cases the bombing runs occurred at night, using ground-based radar for bomb guidance, so the pilots never even saw what they were hitting. Presumably they could go from detection to targeting to, ahem, interdiction, in less than five minutes.
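
Identifying a specific truck from its engine noise boils down to matching a dominant acoustic frequency against a table of known signatures -- something the analysts did by ear. A toy sketch of that idea, where the signature table and matching tolerance are invented for illustration and not real data:

```python
# Hypothetical signature table: dominant engine frequency in Hz.
# These values are invented for illustration, not historical data.
KNOWN_SIGNATURES = {
    "ZIL-157 truck": 30.0,
    "diesel generator": 50.0,
}

def classify(dominant_hz, tolerance=2.0):
    """Return the closest known signature within tolerance, else None."""
    best, best_dist = None, tolerance
    for name, hz in KNOWN_SIGNATURES.items():
        dist = abs(dominant_hz - hz)
        if dist <= best_dist:
            best, best_dist = name, dist
    return best

print(classify(29.2))  # close to the truck's signature
print(classify(40.0))  # too far from anything known
```

The human analysts were, in effect, running a far more sophisticated version of this lookup in their heads, robust to noise and Doppler shift in a way a fixed table is not.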

The most incredible thing about Igloo White was the sheer amount of resources that were poured into the program: the whole thing went from concept to reality in little more than a year, and cost more than a billion dollars. The system relied entirely on human analysts to interpret the sensor data, which was often noisy; later in the program the North Vietnamese became adept at spoofing the sensors. The number of people involved in deploying the sensors, monitoring the signals, and interpreting the data was almost inconceivable; the operations center in Thailand was the largest building in Southeast Asia at the time, complete with air conditioning, positive pressure, and airlocks to prevent contaminants from damaging the computers inside. This is not the kind of thing you're going to do with two grad students and a half-million in NSF funding!

The I-REMBASS system from L3 Communications represents the state-of-the-art in (deployed) military sensor networks. And, of course, much of the modern work on sensor nets came out of DARPA programs such as NEST. Still, it's fascinating to see the history behind this technology.

Comments

  1. Yeah, you should talk with Paul Horowitz, who is one of the "Jasons" -- they were the ones who got the idea for all this. But there was a lot of bad fallout for them, as detailed in the book by Anne Finkbeiner.

  2. I think that Dick Garwin was the Jason who came up with the idea for Igloo White and pushed it.

    The first "meshy" WSN that I know of was the brainchild of Lacoss at Lincoln Labs in the late 70s. You can find their final report (1986) here:
    http://tinyurl.com/ye68qjk

    For those interested in the 2001 Twentynine Palms deployment, it's described here:
    http://tinyurl.com/ydnjjyz

    ksjp

  3. two grad students and 500k in funding.. right! wonder how academics can be so optimistic about "changing the world" with their ever-so-desperate rat-race to publish in tier-1 conferences based on some last minute toy evaluation of their "designs". take a break from serial-writing papers, and do some impactful work (ah.. and spare those its-the-long-term-impact fairy tales, please)!

  4. I am a Vietnamese. I am also a grad student doing research in WSN (so, your blog is one of my most favourite blogs).

    Today, after reading this blog entry, I feel so sad for what some people have done. Whenever I present a problem and propose a new idea to my advisor, he asks me questions like: "Is this problem really useful for people? Can you give more convincing examples to show that people really need the problem solved?" But over 40 years ago, more than a billion dollars was used to make WSN a means to kill people. Oh, WSN! Someday the great idea in your current paper could be used for a very different (and possibly bad) purpose :-(

    Just ignore my comment!

  5. No wonder this failed: "(All of this was done with analog circuity)"
    If any of you viewers have an idea for digital CIRCUITRY for sensors that would fit armored discs that are 1 nanometre thick by 25 nanometres in diameter let me know.


