Slashdot

News for nerds, stuff that matters

TCP/IP Might Have Been Secure From the Start If Not For the NSA

Fri, 04/04/2014 - 7:50pm
chicksdaddy writes: "The pervasiveness of the NSA's spying operation has turned it into a kind of bugaboo — the monster lurking behind every locked networking closet and the invisible hand behind every flawed crypto implementation. Those inclined to don the tinfoil cap won't be reassured by Vint Cerf's offhand observation in a Google Hangout on Wednesday that, back in the mid-1970s, the world's favorite intelligence agency may also have stood in the way of stronger network-layer security being part of the original specification for TCP/IP. (Video with time code.) Researchers at the time were working on just such a lightweight cryptosystem. Cerf noted that, on Stanford's campus, Whit Diffie and Martin Hellman had researched and published a paper describing how a public-key cryptography system would work, but they didn't yet have the algorithms to make it practical. (Ron Rivest, Adi Shamir, and Leonard Adleman published the RSA algorithm in 1977.) As it turns out, however, Cerf did have access to some bleeding-edge cryptographic technology back then that might have been used to build strong, protocol-level security into the earliest specifications of TCP/IP. Why weren't those tools used? They were part of a classified NSA project Cerf was working on at Stanford in the mid-1970s to build a secure, classified Internet. 'At the time I couldn't share that with my friends,' Cerf said."
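
The paper Cerf refers to is Diffie and Hellman's 1976 "New Directions in Cryptography." As a rough illustration of the idea it introduced, here is a minimal textbook Diffie-Hellman key agreement in Python; the tiny parameters and everything else in the snippet are illustrative assumptions, not anything from Cerf's classified project or the original TCP/IP work.

```python
# Textbook (unauthenticated) Diffie-Hellman key agreement, shown only to
# illustrate the 1976 public-key idea mentioned above. The parameters are
# toy-sized; real deployments use large, standardized groups and authenticate
# the exchange.
import secrets

p = 4294967291                       # a small prime (2**32 - 5), far too small for real use
g = 2                                # public base

a = secrets.randbelow(p - 2) + 2     # Alice's private exponent
b = secrets.randbelow(p - 2) + 2     # Bob's private exponent

A = pow(g, a, p)                     # Alice publishes g^a mod p
B = pow(g, b, p)                     # Bob publishes g^b mod p

# Each side raises the other's public value to its own secret exponent,
# so both arrive at the same shared key g^(a*b) mod p.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)

assert shared_alice == shared_bob
print(hex(shared_alice))
```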

Read more of this story at Slashdot.








Five-Year-Old Uncovers Xbox One Login Flaw

Fri, 04/04/2014 - 7:08pm
New submitter Smiffa2001 writes: "The BBC reports that five-year-old Kristoffer Von Hassel from San Diego has uncovered a (frankly embarrassing) security flaw in the Xbox One login screen. Apparently, by entering an incorrect password at the first prompt and then filling the second field with spaces, a user can log in to an account without knowing its password. Young Kristoffer's dad submitted the flaw to Microsoft, who have patched it and generously provided four free games, $50, a year-long Xbox Live subscription, and an entry on their list of Security Researcher Acknowledgments."
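
Microsoft never published the root cause, so the mechanism is anyone's guess, but the reported behavior is consistent with a retry path that mishandles whitespace. A purely hypothetical sketch of that kind of bug (names and logic invented for illustration):

```python
# Hypothetical illustration of the *kind* of validation bug described above;
# the real Xbox One login code and its root cause were never made public.
def login(stored_password: str, first_attempt: str, second_attempt: str) -> bool:
    if first_attempt == stored_password:
        return True
    # Flawed retry path: the second attempt is stripped before checking, and an
    # empty result is wrongly treated as "no further verification required".
    retry = second_attempt.strip()
    if retry == "":        # BUG: a field full of spaces collapses to "" and passes
        return True
    return retry == stored_password

# Wrong first password, spaces in the second field, and we're in anyway:
assert login("hunter2", "wrong", "        ") is True
```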

Read more of this story at Slashdot.








How Many People Does It Take To Colonize Another Star System?

Fri, 04/04/2014 - 6:25pm
Hugh Pickens DOT Com writes: "The nearest star systems — such as our nearest neighbor, Proxima Centauri, which is 4.2 light-years from home — are so far away that reaching them would require a generational starship. Entire generations of people would be born, live, and die before the ship reached its destination. This raises the question of how many people you would need to send on a hypothetical interstellar mission to sustain sufficient genetic diversity. Anthropologist Cameron Smith has calculated how many people would be required to maintain genetic diversity and secure the success of the endeavor. William Gardner-O'Kearney helped Smith build the MATLAB simulations to calculate how different scenarios would play out during interstellar travel, and ran some simulations specifically to show why the success of an interstellar mission depends crucially on the starting population size. Gardner-O'Kearney calculated each population's possible trajectory over 300 years, or 30 generations. Because there are a lot of random variables to consider, he calculated the trajectory of each population 10 times, then averaged the results. A population of 150 people, proposed by John Moore in 2002, is not nearly large enough to maintain genetic variation: over many generations, inbreeding leads to the loss of more than 80 percent of the original diversity found within the hypothetical gene. A population of 500 people would not be sufficient either, Smith says: 'Five hundred people picked at random today from the human population would not probably represent all of human genetic diversity ... If you're going to seed a planet for its entire future, you want to have as much genetic diversity as possible, because that diversity is your insurance policy for adaptation to new conditions.' A starting population of 40,000 people maintains 100 percent of its variation, while the 10,000-person scenario stays relatively stable too. So Smith concludes that a number between 10,000 and 40,000 is a pretty safe bet when it comes to preserving genetic variation. Luckily, tens of thousands of pioneers wouldn't have to be housed all in one starship. Spreading people out among multiple ships also spreads out the risk. Modular ships could dock together for trade and social gatherings, but travel separately so that disaster for one wouldn't spell disaster for all. 'With 10,000,' Smith says, 'you can set off with a good amount of human genetic diversity, survive even a bad disease sweep, and arrive in numbers, perhaps, and diversity sufficient to make a good go at Humanity 2.0.'"
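
Smith's actual model was a detailed MATLAB simulation with mating and disaster scenarios, but the core drift effect it reports can be sketched with a much simpler Wright-Fisher toy model. Everything below is an illustrative assumption (in particular the 50-variant founder pool), not the paper's method; it only shows why small crews shed variants of a hypothetical gene so much faster than large ones.

```python
# Toy Wright-Fisher drift sketch: a crew founds the colony carrying 50 equally
# common variants of one hypothetical gene; each generation resamples 2N gene
# copies at random; "diversity" is the fraction of founder variants still
# present after 30 generations, averaged over 10 runs as in the article.
import random
from statistics import mean

FOUNDER_VARIANTS = 50          # assumed for illustration, not from the study

def retained_diversity(pop_size: int, generations: int = 30) -> float:
    copies = [i % FOUNDER_VARIANTS for i in range(2 * pop_size)]
    for _ in range(generations):
        copies = random.choices(copies, k=2 * pop_size)   # random-mating drift
    return len(set(copies)) / FOUNDER_VARIANTS

def averaged(pop_size: int, runs: int = 10) -> float:
    return mean(retained_diversity(pop_size) for _ in range(runs))

if __name__ == "__main__":
    for n in (150, 500, 10_000, 40_000):
        print(f"{n:>6} colonists: {averaged(n):.0%} of founder variants retained")
```

Even this crude sketch shows the pattern described above: the 150-person crew loses a large share of its variants to drift within 30 generations, while the 10,000- and 40,000-person scenarios keep essentially all of theirs.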

Read more of this story at Slashdot.







